Using photochemical processes, LED and laser light delivers bio-photons into damaged cells. The cells begin to produce energy (ATP), which improves their function, assists their division, strengthens the body’s immune system, and triggers the secretion of various hormones. When the tissues heal, the pain decreases or disappears. If damaged cells have died, the bio-photons help neighboring cells divide, generating new tissue and thus bringing about healing.

- Low Level Laser Therapy promotes healing in many conditions because it penetrates the skin, increases ATP, and activates enzymes in the targeted cells.
- Growth factor response within the cells and tissue as a result of increased ATP and protein synthesis.
- Improved cell proliferation.
- Pain relief as a result of increased endorphin release.
- Strengthened immune response via increased lymphocyte activity.

I use Low Level Laser Therapy along with Acupuncture, Herbs (internal & external), Moxa/TDP Therapy, Magnets, Cupping, and Massage to achieve, on average, half the recovery time of traditional post-injury or surgical therapy.

How it works

Pulsed infrared laser radiation penetrates 10-13 cm (4-5 inches) deep and has strong stimulating effects on blood circulation, cell membranes, and intracellular metabolism. Continuous incoherent infrared radiation penetrates to a shallower tissue depth and has an overall broader spectrum compared to laser radiation. Red LED light penetrates a smaller tissue depth and has a beneficial anti-inflammatory effect. The magnetic field provides energy-mediated protection of the organism against environmental impacts: climatic factors and electromagnetic fields. The magnetic field also keeps ionized molecules of tissue in a dissociated state, thus enhancing the energy potentials at the molecular and cellular levels.
Peak pulse power of up to 25,000 mW with an average power of ~7.5 mW achieves a high depth of tissue penetration while delivering gentle average power levels.

TerraQuant for Sports Medicine: A Safe and Effective Way to Recover from Sports Injuries

The most popular use of laser and LED therapy is the treatment of sports injuries, soft tissue, and joint conditions. Below are a few reasons why many of the world’s professional athletes use laser therapy to speed up their recovery. There is more published clinical and physiological evidence supporting the use of laser and LED photobiomodulation for soft tissue injuries and joint conditions than for any electrotherapy modality traditionally used by physiotherapists.

- It’s quick and simple to apply.
- It can be used immediately after injury, even over pins and plates.
- It’s considered the safest “electrotherapy” available by research experts.
- It has a proven worldwide track record of effective and safe treatments for pain management.
Source: https://drpendleberry.com/low-level-cold-laser-acupuncture-therapy/
Language labs are an excellent way to simulate conversation for your students, as they provide all the equipment the students need to hear and speak. However, setting up a language lab is extremely expensive, especially if you have to buy all the tools yourself. Luckily, there are plenty of less costly alternatives that work just as well for speaking practice. If you’re looking for a new way to get your students talking, try these methods that won’t break the bank.

What’s in the Bag?

One activity you can try with your students involves hiding an object inside a bag. Give one student the chance to look inside the bag and see what it is. Once they get a good look, they’ll have to describe it to the rest of the class. The other students will then try to guess what the object is by asking the person who saw it. Whoever guesses right becomes the next person to describe a new hidden object.

Play YouTube Videos

Just because you don’t have a full language lab to work with doesn’t mean you can’t use technology in the classroom. For example, playing YouTube videos on an in-room projector is a great way to get the whole class involved in speaking English. If your classroom doesn’t have a projector, no worries – just gather everyone in a circle and use your phone to play the video. There are numerous channels available that have full English conversations for listening and replicating, including Daily English Conversation and Eko Languages. Play a conversation video and have students mimic the conversation after it’s done, or have them continue where the conversation left off. Assign students to work in pairs so they can focus on one-on-one interaction.

Just making your students ask each other questions in a circle can get a bit tedious, so this activity kicks things up a notch for more excitement. Print out a list of questions from this large repository and cut them into tiny strips.
Then, place them all in a bag and let your students each pick one out. Your students will then have to mingle through the classroom and ask everyone the question they’ve chosen. Have them write down or remember their favorite answer so they can share it after they’ve talked to everyone. It’s a good way to practice second-person questions (What is your favorite color?) and then third-person statements (Her favorite color is magenta).

Act out a Skit

Let your students channel their inner Brad Pitt or Angelina Jolie by asking them to act out little skits in English. Give them a situation to focus on, such as ordering food at a restaurant or seeing an old friend on the street. Then, let them use their own creativity to prepare the dialogue and setting. What begins as a mundane scenario might quickly escalate into a hilarious course of events if your students come up with something fun. And when students are having fun, they’ll forget they’re in a classroom and become more natural with their speech.

Students (especially children) learn much better when they’re moving around, so try to plan as many mobile activities as possible. One such activity is zombie questions. In this game, one person is the zombie, and the rest of the class stands in a circle around them. The zombie picks one person to move towards and asks them a question, and that person must shout out an answer in English to stay safe. If they are unable to speak a sentence by the time the zombie reaches them, they become the zombie and must hunt down victims. The question could cover whatever area of English you’re working on, such as “What is your name?” or “What’s your favorite food?” Switch it up every couple of rounds to keep your students on their toes.

Two Truths, One Lie

This is a simple game, just like the one you’ve probably played yourself at some point. Have every student write down two truths and one lie, and then let the class decide which statements are true and which is false.
These are just a few of the free alternatives to a language lab. In the end, you have to see what works well with your particular students. Experiment a little with these activities, and you’ll likely begin to see a big improvement in your students’ skills.
Source: https://extemporeapp.com/free-alternatives-language-labs-get-students-talking/
Computer programming is the process of writing the code that controls computers and their software. The code instructs the computer to perform the tasks it is programmed to do, whether simple or complex. A programmer’s job includes looking for and fixing problems in code. Computers use programming languages; through these languages, they get work done. Of course, if your essay is related to programming, you could use some of the information provided below, but if you do not have time for writing, the best US Essays Writers are ready to provide help at any time.

Computer Programming Careers

Computer programming offers a wide range of career options. With the average salary of a programmer being about $79,084, you can find work in fields ranging from computer systems analysis to data administration. Other jobs, like animation, graphic design, and video game development, are more fun and creative. Computer programming suits any personality: introverts, extroverts, collaborators, and even loners.

Degree vs. Self-Taught

There are pros and cons to each. Teaching yourself computer programming saves a lot of money and time. Since you are learning at your own speed, you may master skills either faster or slower than someone in college. You also get the freedom to choose what and when to learn. The con is that some employers may not hire you because you lack a degree, although having experience and a stunning portfolio will pull them towards you.

Going through college and earning a degree, on the other hand, is a journey. Even though it takes longer, a degree from a five-star institution such as MIT will get you a job from the word go. You may not work at your own pace while pursuing a degree, but you will have access to experts and deeper knowledge. You will also tend to earn a higher salary with a degree, and potential employers will rank you above those without one.

Attaining a Degree in Computer Programming

There are few dedicated degrees in this exact field.
Most programmers opt for a computer science degree, which is a similar field and teaches the skill set needed for programming. For instance, if you want to be a game developer, you can start by teaching yourself how to code through boot camps. Once in college, pick a major, which in most cases will be computer science. If you are in it for video games, a minor in graphic design or creative writing would complement your major. Note that in the field of computer programming, a degree is not strictly a requirement. If you know how to code, you are as capable of getting these jobs done as someone with a degree.

Graduate or Undergraduate?

You don’t have to go for a graduate degree; it will just put you slightly ahead of the competition. An undergraduate degree will attract employers, but with a graduate degree you will have higher chances of employment and a much higher salary. Though not a necessity, getting a degree in a related field will boost your career. Self-teaching and going to college are sometimes viewed as equivalent. Nevertheless, an undergraduate degree helps a lot, and an advanced graduate degree will present you with even better opportunities.
Source: https://wncgreenbuilding.com/advantages-of-computer-programming-degree/
The lungs are a pair of spongy organs in the chest and are separated from each other by the heart. The lungs are divided into lobes. The right lung has an upper, a middle, and a lower lobe; the left lung only has upper and lower lobes. When you breathe in, air flows past your nose and larynx (or voice box) and into the trachea (or windpipe). Just before it reaches the lungs, the trachea divides into two smaller airways called the bronchi, one bronchus for each lung. These airways divide further and further into smaller tubes called bronchioles, which end in the alveoli. The alveoli are microscopic air sacs where oxygen from inhaled air enters the blood, and carbon dioxide leaves the blood and is eventually exhaled. Each lung is enclosed and protected by the pleura—two layers of thin pleural membrane. The pleural space between these two layers contains a small amount of pleural fluid to lubricate the membranes so they can slide easily over each other when you breathe. Below the lungs is the diaphragm, a thin sheet of muscle that helps you breathe.

Cells are the building blocks that make up the tissues and organs. Normally, before a cell dies, it makes a new cell to take its place. However, sometimes a cell becomes abnormal and makes many copies of itself. These copies pile up and form a tumour, a lump of abnormal cells. They mimic healthy cells in the body to evade the body’s natural defences. Benign tumours, like moles and warts, cannot invade their surrounding tissue or spread to other locations in the body. Malignant tumours are cancerous. They continue to grow and invade the surrounding tissue. Sometimes cancer cells break away from the tumour and travel to other organs via blood or lymph vessels. This process is called metastasis. Cancer can metastasize to any part of the body; however, cancerous cells from a lung tumour commonly spread to the other lung, lymph nodes, adrenal glands (which are located on top of each kidney), bones, brain, and liver.
Cancers are named after the site in which they first develop—in the case of lung cancer, the primary tumour is in the lung. Even when lung cancer spreads to other parts of the body, the diagnosis remains lung cancer; the cancer that has spread is called a secondary tumour or metastatic lung cancer. Similarly, if cancer from elsewhere spreads to the lungs, it is not referred to as lung cancer, but as lung metastases from the primary site.

Cancer Growth and Metastasis

Lymph nodes are small, bean-shaped structures located throughout your body that are part of the lymphatic system. The lymphatic system is a network of organs, vessels, and lymph nodes that helps circulate body fluids and defends the body against microbes and abnormal cells. When there is an infection, injury, or cancer in a part of the body, the lymph nodes in that area get bigger. For example, when you have a cold or a sore throat, the lymph nodes in your neck get swollen. If you visited a doctor with these symptoms, they may have felt your neck to check for these enlarged lymph nodes. Cancer cells from a malignant tumour will sometimes break away and travel through blood or lymph vessels. The cancer cells can lodge themselves in nearby lymph nodes, which normally filter out microbes and abnormal cells. There, they will grow and divide to form a new tumour, which may shed more cancerous cells that can then spread further in the body. The spread of cancer to the lymph nodes is an important factor in determining the extent or stage of that cancer. The number of affected lymph nodes, the amount of cancer in them, and how far they are from the primary tumour are all considered when your doctor creates your treatment plan.

The two most common types of lung cancer are non–small cell lung cancer and small cell lung cancer. The words small and non–small refer to the size of the cells found in the tumour and not the size of the tumour itself.
Non–small cell lung cancer (NSCLC) is the most common type of lung cancer, accounting for around 80% to 85% of all cases. There are three main subtypes of NSCLC.

Adenocarcinoma usually starts in mucus-producing glands and is often found in the outer edges of the lungs. It is the most common form of lung cancer in general, as well as in women and non-smokers. Adenocarcinomas may result from known genetic changes that can be treated with targeted therapy.

Squamous cell carcinoma (SCC) usually develops in cells lining the bronchi and larger bronchioles, and is often found in the central areas of the lung. SCC is quite common in smokers. Men are more likely to develop squamous cell carcinoma than women.

Large cell carcinoma (LCC) can occur anywhere in the lungs but is usually found near the surface and outer edges of the lungs. LCC is the fastest growing subtype of NSCLC and may grow to a very large size before causing any symptoms. Adenocarcinoma and LCC are often referred to as non-squamous lung cancer or non-squamous NSCLC.

Small cell lung cancer (SCLC) accounts for about 15% of all lung cancers. These cancers usually develop near the centre of the lungs in the bronchi, and invade nearby tissues and lymph nodes. SCLC is also referred to as oat cell carcinoma because the cancer cells look like oat grains under a microscope. SCLC behaves quite differently from NSCLC and is more aggressive. The cancer cells divide more rapidly to form large tumours that can spread throughout the body before being detected.

Other types of cancer affecting the lungs

Soft-tissue sarcomas in the lung are rare occurrences. They usually develop in the pleural membranes and grow very slowly. Carcinoid tumours in the lung are rare, slow-growing tumours that arise from hormone-producing cells in the lining of the bronchi and bronchioles. Pleural mesothelioma is a rare type of cancer that starts in the pleural membranes that envelope each lung. It is usually caused by exposure to asbestos.
Although technically not a type of lung cancer, pleural mesothelioma is treated by the same specialists who treat lung cancer. For more information on the other types of lung cancer, refer to:

The Demographics of Lung Cancer

- The incidence rate for lung cancer is higher for males than females, although sex-specific rates among younger adults appear to be converging.
- In males, the rising incidence rate of lung cancer began to level off in the mid-1980s and has been declining since then.
- Among females, the rate continued to rise and did not level off until 2006.
- The differences in incidence rates between males and females reflect past differences in tobacco use (in females the drop in smoking occurred approximately 20 years later than it did in males, suggesting that lung cancer incidence rates in females may also begin to decrease in the coming years).

Lung cancer stage [1]

- The distribution of stage I lung cancers appeared higher in females (23.7%) than males (17.8%), and the distribution of stage IV lung cancers appeared higher in males (52%) than females (47.1%), but these differences may not be statistically significant.
- There was no obvious pattern in the percent distribution across age groups, except that the percentage of cases where the stage was unknown increased as age increased, from a low of 1.1% among people aged 18–59 years at diagnosis to a high of 8.8% among people aged 90 years and older at diagnosis.
- The highest ASIR of stage IV NSCLC was observed in Nova Scotia (38.5 per 100,000) and the lowest in Ontario (25.5 per 100,000). For stage IV SCLC, the highest ASIR was also observed in Nova Scotia (9.0 per 100,000), but the lowest was observed in British Columbia (4.3 per 100,000).

Projected New Cases in 2017 [2]

[1] Canadian Cancer Statistics Advisory Committee. Canadian Cancer Statistics 2018. Toronto, ON: Canadian Cancer Society; 2018. Available at: cancer.ca/Canadian-Cancer-Statistics-2018-EN
[2] Canadian Cancer Society’s Advisory Committee on Cancer Statistics. Canadian Cancer Statistics 2017. Toronto, ON: Canadian Cancer Society; 2017. Available at: cancer.ca/Canadian-CancerStatistics-2017-EN.pdf
Source: https://www.lungcancercanada.ca/Lung-Cancer.aspx
The National Drowning Report 2019, presented by Royal Life Saving, highlights research and analysis of fatal and non-fatal drowning across Australia between 1 July 2018 and 30 June 2019. During this time, 276 people lost their lives to drowning, and a further 584 people are estimated to have experienced a non-fatal drowning incident. The 31 drownings in swimming pools represent a 16 per cent improvement over the previous period (36 drownings) and a 23 per cent reduction compared to the 10-year average. In the critical 0-4 age group, there was a 12 per cent increase over the previous period (19 drownings compared to 17 previously), but that number still represented a reduction of 30 per cent against the 10-year average of 27 drownings.

This year’s findings show that:
• The total number of drowning deaths over the past year increased by 10 per cent on the previous year;
• The hottest summer on record led to a 17 per cent increase in summer drowning deaths when compared with the 10-year average;
• Rivers accounted for 29 per cent of all drowning deaths, more than any other location;
• There was a 39 per cent increase in multiple fatality events (multiple people drowning in one incident) compared with the 10-year average;
• People aged 45 to 55 years accounted for 15 per cent of the total number of drowning deaths, the most of any age group.

The report also shows that drowning deaths in children aged 0-4 years decreased by 30 per cent when compared with the 10-year average, and that children aged 5-14 years remain the lowest age group for drowning (3 per cent of all drowning deaths). Consistently low numbers of drowning deaths in children in recent years are encouraging, showing that Keep Watch messages, which highlight the importance of active supervision, physical barriers to water and water familiarisation, are hitting home and helping to keep children safe.
Justin Scarr, Australian CEO of the Royal Life Saving Society, says that their work continues to focus on understanding the impact of both fatal and non-fatal drowning. “Through this work, we aim to educate, inform and advocate best practice, working with partners and policy makers, to develop robust national drowning prevention and water safety strategies,” he says.

Recent research from Royal Life Saving investigated the swimming and water safety skills of children aged two to 15 years attending lessons at commercial swim schools, outside of school or vacation-based programs, across New South Wales, South Australia, Victoria and Queensland between July 2014 and December 2016. This research provided insights into the skills being taught in commercial swim schools and achievements in relation to the Framework.
• 56 per cent of children aged 2 to 4 years attending lessons live in areas of high socioeconomic status
• Children aged 2-4 years make up approximately 25 per cent of children attending private swim schools
• Four-year-old children attended an average of 24 lessons over approximately 5.6 months
• Four-year-old children accounted for the highest number of children in lessons
• The average age at which four-year-olds had started lessons was 3.3 years

Swim lesson recommendations:
• Advocate for all Australians, regardless of age or background, to access quality swimming and water safety education, and increase participation among high-risk populations.
• Advocate for investment in swimming and water safety education, including the provision of swimming and water safety lessons such as school-based and vacation programs.
• Raise industry awareness and implementation of the National Swimming and Water Safety Framework, and evaluate the impact of the Framework.
• Evaluate swimming and water safety programs (including school, vacation and commercial) to ascertain best practice and outcomes for participants.
• Consolidate terminology when referring to and discussing ‘swimming lessons’, ‘learn to swim’, ‘water safety’, ‘survival skills’ and ‘lifesaving skills’.
• Advocate for the development and redevelopment of aquatic facilities, and work with industry to improve access for all Australians.
• Investigate the effectiveness of drowning prevention, water safety and lifesaving initiatives for teenagers and adults, and how best to increase participation.
Source: https://www.splashmagazine.com.au/latest-report-shows-23-per-cent-reduction-in-pool-drownings/
Low Power Wide Area Networks (LPWANs) are the fastest growing IoT communication technology and a key driving force behind worldwide IoT connections. With several LPWAN solutions and vendors available today, choosing the right technology for your IoT projects is no simple task. To help you select the right solution for your specific use case and application, here are the top 10 criteria you should consider.

1. Reliability

The importance of industry-grade reliability, especially in mission-critical applications, can’t be overstated. A high reception rate and minimal packet loss eliminate the need to resend messages, even in harsh conditions. This ensures critical data arrives quickly while reducing the power consumed by repeated transmissions. For LPWAN technologies operating in the increasingly congested license-free spectrum, interference resilience is a prerequisite for high reliability. An LPWAN’s technical design determines its ability to avoid interference and packet collisions when traffic is high, which improves the overall reception rate. A robust technology combines a variety of approaches, such as narrow bandwidth usage, short on-air radio time, frequency hopping techniques and forward error correction, to minimize the risk of contention.

2. Security

Message confidentiality, authentication and integrity are core elements of network security. Multi-layer, end-to-end encryption should be natively embedded in the network to protect message confidentiality against eavesdropping and potential breaches. The Advanced Encryption Standard (AES) is a lightweight, strong cryptographic algorithm for data encryption in IoT networks. Typically, 128-bit AES can be used to establish network-level security for data communications over the air interface, from end nodes to the base station.
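As an illustration of the authentication-and-integrity side of this scheme, here is a minimal sketch. AES is not in Python's standard library, so the sketch uses HMAC-SHA256 in its place; the network key, 4-byte frame counter and 8-byte truncated tag are illustrative assumptions, not taken from any specific LPWAN specification.

```python
import hashlib
import hmac

# Shared 128-bit network key, provisioned on both the end node and the
# base station (hypothetical value, for illustration only).
NETWORK_KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

def frame_payload(payload: bytes, counter: int) -> bytes:
    """Attach a frame counter and a truncated HMAC-SHA256 tag.

    The counter defends against replay; the tag lets the base station
    verify authenticity and integrity before accepting the message.
    """
    header = counter.to_bytes(4, "big")
    tag = hmac.new(NETWORK_KEY, header + payload, hashlib.sha256).digest()[:8]
    return header + payload + tag

def verify_frame(frame: bytes):
    """Return the payload if the tag checks out, else None."""
    header, payload, tag = frame[:4], frame[4:-8], frame[-8:]
    expected = hmac.new(NETWORK_KEY, header + payload, hashlib.sha256).digest()[:8]
    return payload if hmac.compare_digest(tag, expected) else None

frame = frame_payload(b"\x17\x2a", counter=42)
assert verify_frame(frame) == b"\x17\x2a"  # authentic frame accepted
# Flipping any bit invalidates the tag:
assert verify_frame(frame[:-1] + bytes([frame[-1] ^ 0x01])) is None
```

A real deployment would pair this with AES payload encryption, as the article notes, and per-device keys; the general pattern of a monotonically increasing counter plus a truncated MAC is how several LPWAN frame formats bind integrity to replay protection.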
At the same time, the Transport Layer Security (TLS) protocol on the backhaul connection provides a complementary security layer to protect IP-based data transfer to the cloud. The most secure LPWAN technologies also incorporate rigorous message authentication mechanisms to confirm message authenticity and integrity. This ensures that only valid devices can communicate over your network and that messages are not tampered with or altered during transmission.

3. Network Capacity

A large network capacity allows you to scale with your growing demand for data acquisition points without compromising Quality-of-Service. As radio range is nearly identical across LPWAN technologies, network capacity becomes a critical indicator of infrastructure footprint. The more end devices and daily messages a single base station can support, the less infrastructure you will need. Efficient use of the limited radio spectrum, or spectrum efficiency, is key to achieving a large network capacity. An ultra-narrowband approach with minimal bandwidth usage provides very high spectrum efficiency, allowing more messages to fit into an assigned frequency band without overlapping each other. At the same time, LPWAN systems using asynchronous communication need a mitigation scheme to prevent packet collisions (i.e. self-interference) as the number of messages and the transmission frequency increase.

4. Battery Life

Battery life has a major impact on your Total Cost of Ownership and corporate sustainability goals. Although LPWAN technologies share some common approaches to reducing power consumption, battery life still varies greatly across systems.
This is largely attributed to differences in on-air radio time, i.e. the actual transmission time of a message, which matters because transmission is the most power-intensive activity. In cellular-based LPWANs, synchronous communications with heavy overheads and handshaking requirements also drain power quickly.

5. Mobility Support

Moving devices, moving base stations or moving obstacles along the propagation path are all sources of Doppler shifts and deep fading that lead to packet errors. LPWAN technologies that lack resistance to Doppler effects can only support data communications from stationary or slow-moving end devices. This limits their applicability in certain IoT use cases such as fleet management. Similarly, such networks may fail to connect nodes operating in fast-changing environments, for example a device installed next to a highway with vehicles travelling at more than 100 km/h.

6. Public vs. Private Network

When selecting an LPWAN solution, you also need to consider which suits your requirements better: a public or a private network. The biggest benefit of public LPWANs run by network operators is savings on infrastructure costs. However, with a public LPWAN you are dependent on the provider's network footprint, which is often far from globally ubiquitous. Public LPWANs leave coverage gaps in many areas, and nodes operating at the network edge often suffer from unreliable connections. Private networks, on the other hand, allow for rapid deployments by end users, with flexibility in network design and coverage based on their own needs. Another major drawback of public networks is data privacy concerns over the centralized back-end and cloud server.

7. Proprietary vs. Standard

By supporting multiple vendors, industry-standard LPWAN technologies with a software-defined approach help avoid the problem of vendor lock-in while promoting long-term interoperability. Adopters therefore have the flexibility to adapt to future technological developments and changing business needs. Having passed a rigorous evaluation process, solutions standardized and recognized by a Standards Development Organization also deliver assured credibility and Quality-of-Service.

8. Operating Frequency

Operating frequency is another factor to consider when choosing an LPWAN technology, as it can significantly affect network performance. Due to the high cost barrier of licensed bands, most LPWAN vendors leverage license-free industrial, scientific and medical (ISM) frequency bands for faster technology development and deployment. While there are many ISM bands available today, there are some major differences between the 2.4 GHz band and the sub-GHz bands. Generally, an LPWAN operating at 2.4 GHz provides higher data throughput at the expense of shorter range and battery life. On top of that, 2.4 GHz radio waves have weaker building penetration and are exposed to much higher co-channel interference.

9. Data Rates

Every IoT application has a different data rate requirement, which should be measured against the LPWAN solutions under consideration. It is worth noting that most IoT remote monitoring applications are fairly latency-tolerant and only need to transmit data periodically. As faster data rates often come with trade-offs in range and power consumption, picking the solution that best balances these criteria will benefit your Return on Investment (ROI).

10. Variable Payload Size

Payload, or user data, size should be driven by actual application needs rather than fixed by a particular technology.
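Operating frequency, data rate and payload size interact: together they determine on-air time, which in turn drives collision risk and power use. A back-of-envelope sketch (the 12-byte frame overhead and both data rates are illustrative assumptions, not figures from any particular LPWAN):

```python
def on_air_time_ms(payload_bytes: int, data_rate_bps: float,
                   overhead_bytes: int = 12) -> float:
    """Approximate on-air time of one message, in milliseconds.

    overhead_bytes stands in for preamble, header and integrity tag;
    real frame formats differ, so treat this as an illustration only.
    """
    total_bits = (payload_bytes + overhead_bytes) * 8
    return total_bits / data_rate_bps * 1000.0

# A 20-byte sensor reading at two illustrative rates:
print(on_air_time_ms(20, 300))      # slow sub-GHz link: roughly 853 ms on air
print(on_air_time_ms(20, 50_000))   # fast 2.4 GHz link: roughly 5 ms on air
```

The two-orders-of-magnitude gap in airtime between the sample rates is exactly the throughput-versus-range-and-battery trade-off described above for sub-GHz versus 2.4 GHz operation.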
LPWAN answers with variable payload dimension permit customers to seamlessly combine new use circumstances into their present community infrastructure—irrespective of the payload requirement. To sum up, the era and technical design in the back of an LPWAN resolution resolve its efficiency within the standards mentioned above. Selecting the best resolution calls for weighing those standards in line with your IoT software necessities and measure other LPWAN choices according to the explained standards.
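The weighing of criteria described above can be made concrete with a simple weighted-scoring sketch. Everything below is illustrative: the candidate technology names, the 1-5 scores, and the criterion weights are hypothetical placeholders, not measured values, and a real evaluation would substitute weights derived from your own application requirements.

```python
# Hypothetical criterion weights for one application profile (must sum to 1.0).
CRITERIA_WEIGHTS = {
    "range": 0.2,
    "battery_life": 0.3,
    "data_rate": 0.2,
    "mobility_support": 0.1,
    "interference_robustness": 0.2,
}

# Hypothetical 1-5 scores for two generic candidates: a sub-GHz option
# (longer range, longer battery life) and a 2.4 GHz option (higher throughput).
OPTIONS = {
    "option_a_sub_ghz": {"range": 5, "battery_life": 5, "data_rate": 2,
                         "mobility_support": 3, "interference_robustness": 4},
    "option_b_2_4_ghz": {"range": 2, "battery_life": 3, "data_rate": 5,
                         "mobility_support": 4, "interference_robustness": 2},
}


def weighted_score(scores: dict, weights: dict) -> float:
    """Return the weighted sum of criterion scores for one candidate."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())


def rank_options(options: dict, weights: dict) -> list:
    """Rank candidate technologies from best fit to worst fit."""
    return sorted(options,
                  key=lambda name: weighted_score(options[name], weights),
                  reverse=True)
```

With these example numbers, the sub-GHz option ranks first, reflecting the battery-life-heavy weighting; shifting weight toward `data_rate` would favor the 2.4 GHz option instead.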
Every country has native instruments which capture the mood and spirit of its culture. For the Japanese it may be the koto and shamisen; for Indians, the sitar and vina. For Americans there are three instruments which reflect the mood of the country and can be called typically American, although their origins, like most things American, may lie elsewhere. The first of these instruments is the banjo, a simple four-stringed instrument. This stringed musical instrument originally came from Africa and was most probably brought over by black slaves in the early nineteenth century. After working all day in the cotton fields, the slaves would relax in the evening under the shade of plantation trees and sing simple songs of their native lands. They would accompany themselves on simple banjos evocative of the spirit of their homelands. Later, after the Civil War, banjos were widely played in minstrel shows throughout the South, featuring folk music and jazz ensembles. The banjo has a crude sound when plucked, and although it resembles the guitar, its sound is not as mellow and its range is not as wide. Yet, when played well, it creates a distinct atmosphere which evokes the feeling of life on the early American plantation. A second instrument associated with America is the harmonica. Sometimes called a mouth organ, it is a simple reed instrument which can easily be held in one hand. The first harmonicas were made in Germany, but early pioneers brought this instrument with them from their homeland when they came to America. They would play in the evenings while passing the night under the stars. On the lonely prairie, after a long day's work, the sound of the harmonica is especially melancholic. If the banjo has a jittery sound, the harmonica has a distinctly melancholy one. It is the sound of a sad, nostalgic lament.
It is the sound of someone yearning for his home or wanting to return to the lost experiences of happier days. When properly played, it captures the mood of the vast frontier, especially in the West, where the cult of the cowboy dominated the wilderness. The third instrument associated with America is the guitar. It is an instrument of European origin, most likely Spanish. Yet the guitar also played an important part on the American frontier. After a hard day's work rounding up and branding cattle, cowboys would sit around an open campfire and sing songs of love and nature while strumming the guitar. Today the banjo and harmonica may not be as popular as they once were, but the guitar is still very much alive. The revival of folk music in the 1960s brought guitars to college campuses, and there is hardly a rock band today that does not feature an electric guitarist as part of the ensemble.
Who Were the Original Letcher Riflemen?
The unit known as Company H, 2nd Virginia Volunteer Infantry, the Letcher Riflemen, was formed in April 1861 in the community of Duffields in what is now the State of West Virginia. Back then, the area was still part of Virginia and was staunchly pro-rebellion in its sentiments. The Duffields community lies in the Lower, or Northern, end of the Shenandoah Valley. The members of the company came from the towns and farms all over the lower valley, in the area around Shepherdstown and Harpers Ferry and up into Sharpsburg, Maryland.

Why the Name Letcher Riflemen?
The company was formed around the nucleus of the pre-war militia company also known as the Letcher Riflemen. The name derives from the surname of the popular fire-eating antebellum Governor of Virginia, John Letcher (1813-1884). The Letcher Riflemen were mostly farmers, but all walks of life were represented within the company. The two Miller boys were both shoemakers, as was Captain Jenkins, the last commander of the company when it surrendered at Appomattox Court House in 1865.

Men from the "Cousinwealth"
The Lower Valley area was jokingly known as a "Cousinwealth," and the demographics of the Letcher Riflemen reflected the strong family links within the valley. Over 20 families had more than one representative serve in the company during the course of the war. Five of these families contributed three males to the company, and the Link family had four of its male members serve.

The Men of Manassas
When the company was mustered into Confederate service in 1861, it was allocated to the 2nd Regiment of the 1st Virginia Brigade. The Brigade soon won immortality at Manassas (Bull Run) under its first commander, Thomas J. Jackson, and both the man and the Brigade were forever linked by being named "Stonewall."

A Small Company
The Letcher Riflemen were always the smallest company in the 2nd Regiment of the Stonewall Brigade.
The company mustered no more than 16 members in the year between April 1863 and April 1864. Numbers declined dramatically after the bulk of the Stonewall Brigade (along with the 2nd Regiment's battle flag) was captured in the Mule Shoe salient at the Battle of Spotsylvania in 1864. The Letcher Riflemen surrendered at Appomattox with a pitiful total of only 10 men; indeed, the whole 2nd Regiment numbered only 69 members when it surrendered. Interestingly, two members of the company who surrendered at Appomattox made it through the whole course of the war from day one without a single incident of injury or illness. Captain Joseph Jenkins was one of those men; at the surrender, he was in command of the 2nd Regiment. Absence without leave was a huge problem in the company, as it was throughout the entire Stonewall Brigade. Although Company H had the lowest rate of unauthorized absence in the 2nd VA, at least 8% of the company were guilty of this offence at some time during their service. This was in part because the Brigade was engaged or camped in the Shenandoah Valley many times during the war; the temptation to tend to family and crops waiting in close proximity must have been strong. No fewer than 36 of the company's members have the offence of AWOL recorded in their record.

The First Colonel of the 2nd Va. Infantry Regt.: James Walkinson Allen
Stroke is a leading cause of long-term severe disability1 among adults and the fifth leading cause of death in the United States. Approximately 6.6 million Americans 20 years of age or older have experienced a stroke, and stroke affects approximately 795,000 people each year. It is estimated that 87 percent of strokes are ischemic, and 25 to 40 percent of these occurrences are deemed cryptogenic, or strokes of unknown cause.2 A large portion of these cryptogenic patients may have asymptomatic or undiagnosed atrial fibrillation (AF), and yet many patients do not receive additional cardiac monitoring following their initial stroke hospitalization. It is important to determine stroke etiology so that these patients are not at risk for additional strokes or even death.

Article at a Glance
At the Cone Health Stroke Center in Greensboro, NC, the stroke neurology and electrophysiology departments created a Cryptogenic Stroke Protocol. Over 12 months, 18 patients (18.9 percent) developed AF. The median time from index event to device insertion was six days, with a mean time from implantation to AF detection of 50.875 days. As a result, antiplatelet treatment was changed to anticoagulation in 17 of 18 patients within 3.8 +/- 2.9 days of AF detection.

Establishing stroke etiology is imperative in order to prevent subsequent events. At the Cone Health Stroke Center in Greensboro, NC, we developed a Cryptogenic Stroke Pathway that brings together an interdisciplinary team of neurologists, electrophysiologists, cardiologists, and stroke care health professionals to provide more rigorous and integrated care, with consistent follow-up and treatment, so that every stroke patient receives the best possible outcome.

Atrial Fibrillation, Cryptogenic Stroke, and Long-Term Cardiac Monitoring
Atrial fibrillation is one of the most common and undertreated heart rhythm disorders in America and a major risk factor for stroke.
It is important to determine whether AF caused the stroke, because oral anticoagulant treatment can substantially reduce the risk of recurrence, offering a 65 percent relative risk reduction for stroke.3 The likelihood of developing AF increases with age and with additional risk factors, including hypertension, diabetes, cardiac disease, sleep apnea, and alcohol abuse. Once diagnosed with AF, a patient is three to five times more likely to have a stroke.4 An AF-related ischemic stroke is twice as likely to be fatal as a non-AF stroke.5
• Atrial fibrillation detection in cryptogenic stroke patients increases over time with prolonged monitoring.
• An atrial fibrillation diagnosis can alter treatment and prevent recurrent stroke.
• Electrophysiology, cardiology, and neurology involvement in process development ensures a high rate of atrial fibrillation detection.
• A Cryptogenic Stroke Pathway leads to the provision of exemplary stroke care.

Diagnosing AF can be challenging because episodes can be asymptomatic and intermittent.5 Historically, Holter monitors and short-term event monitors (worn for less than 30 days) have been the only way to monitor patients for AF after discharge following a cryptogenic stroke.6 In these cases, patients would be admitted with stroke symptoms and then be diagnosed with an acute ischemic stroke. There would be suspicion of an embolic source based on stroke location or history, and an exhaustive workup would be completed, including a transesophageal echocardiogram (TEE) to look for a cardioembolic source. At discharge, the patient would be scheduled to receive a cardiac monitor, with the option to receive it in the mail or during a subsequent in-office visit for placement and instruction. In these instances, a high percentage of patients would refuse monitoring due to a high copay or would not show up for their follow-up appointment because they were feeling better.
As a result, further cardiac evaluation for AF as the etiology of the stroke was often lost. Even for patients who used a short-term monitor as instructed, AF would not have been detected in many cases, as the median time to AF detection in cryptogenic stroke patients is 84 days.7 As evidenced in the CRYSTAL AF trial, continuous monitoring beyond 30 days with the Reveal Insertable Cardiac Monitor is superior to standard medical care for AF detection in patients with cryptogenic stroke.8 Key findings included:
• 7.3 percent more patients with AF detected at one year than with standard medical care
• 30 percent AF detection at three years versus three percent for standard medical care
• 97 percent of patients with AF were prescribed oral anticoagulants.

The clinical impact of these findings is that short-term monitoring is not sufficient for AF detection in cryptogenic stroke patients; AF detection, however, increased over time with long-term cardiac monitoring. Thus, to increase the chance of detecting AF, it is recommended that an insertable cardiac monitor be implanted to continuously monitor the heart's rhythm for up to three years.

Cone Health Experience
The stroke neurology and electrophysiology departments worked together to create a Cryptogenic Stroke Protocol in which the Reveal LINQ Insertable Cardiac Monitor was implanted in 95 patients from March 2014 through February 2015. Among the patients observed, there was a significant increase in AF detection beyond 30 days of monitoring, which led to a change from antiplatelet therapy to anticoagulation in most patients, thereby increasing secondary stroke prevention and decreasing stroke recurrence:
• The average age was 68.3 years, 57 percent of patients were female, and the average CHA2DS2-VASc score was 4.
• 18 patients (18.9 percent) were diagnosed with AF, and antiplatelet treatment was changed to anticoagulation in 17 of 18 patients within 3.8 +/- 2.9 days.
• Of the patients with an AF detection, all experienced episodes lasting six minutes or more on at least one day.
• The median time from index event to device insertion was six days, with a mean time from implantation to AF detection of 50.875 days.
• Comparing the CRYSTAL AF results with our facility's results, we found more AF during the same period of time.

Why Establish a Cryptogenic Stroke Pathway?
Survivors of transient ischemic attack (TIA) or stroke are at increased risk for another stroke. As caregivers, our goal is to reduce the risk of recurrent stroke. Establishing a Cryptogenic Stroke Pathway can help detect and treat AF in order to significantly reduce a patient's risk for another stroke, and it brings together all the healthcare professionals involved to ensure a thorough standard of care. Through a Cryptogenic Stroke Pathway, hospitals can provide an underserved patient population a better risk-reduction strategy to prevent a secondary stroke, establish cross-functional healthcare professional relationships to ensure integrated care delivery, ensure multidisciplinary stroke care involving both neurological and cardiovascular care, and enhance a hospital's reputation for providing exemplary stroke care. To define our cryptogenic stroke patients, we used criteria similar to those used for the clinical definition in the RE-SPECT ESUS Study.9

Inclusion Criteria include:
• Age 60 years or older, or 50-59 plus at least one additional risk factor for stroke
• Acute ischemic non-lacunar stroke seen on MRI or CT
• Lacunar stroke 1.5 cm
• Arterial or cervical imaging showing extracranial or intracranial atherosclerosis without luminal stenosis greater than 50 percent
• AF not seen on telemetry.
Exclusion Criteria include:
• mRS 4 at time of stroke or inability to swallow medications
• Major-risk cardioembolic source of embolism
• Other indication for anticoagulation
• History of AF
• Other specific stroke etiology
• Primary intracerebral hemorrhage
• Other conditions associated with increased risk of bleeding
• History of symptomatic nontraumatic intracranial hemorrhage
• Renal impairment with CrCl 30
• Hypersensitivity or known contraindication to aspirin or anticoagulants.

Once the Stroke Team ensures the stroke workup is complete and deems the patient to have had a cryptogenic stroke, the following processes occur:
• Cardiology performs the TEE
• Electrophysiology is consulted for patient assessment prior to Reveal LINQ insertion
• If the TEE is unrevealing for an embolic source, electrophysiology inserts the Reveal LINQ and instructs the patient and family, with educational follow-up by a registered nurse and a device representative
• The patient is discharged home with a patient monitor, and a wound follow-up in the device clinic is scheduled 10 days after inpatient implantation
• During follow-up, device transmissions are verified, and daily monitoring is performed by the device clinic
• The physician is notified within four hours of AF detection; the stroke neurology team advises on the preferred oral anticoagulant prior to discharge, and anticoagulation is prescribed and monitored by the cardiology team
• The cryptogenic stroke care team convenes on a regular basis to assess processes and revises them as indicated.

Putting a Cryptogenic Stroke Pathway into Practice
Prior to developing a Cryptogenic Stroke Pathway, one important step is to determine how cryptogenic stroke is defined in your hospital; then consider these key insights to help ensure success in your pathway program:
• Identify key players in the care continuum: determine the stroke champions and align on the multidisciplinary team.
Your stroke team or the primary service caring for stroke patients is the first place to start.
• Protocol alignment: make sure all key stakeholders agree on a pathway and education plan. Key questions to align on include: If there is a stroke protocol, do you currently have a Cryptogenic Stroke Pathway/protocol? What do you do if a stroke etiology is not found? Is it included in your treatment algorithm? And what is the next step: do you pursue a further workup in the hospital or in the outpatient setting?
• Establish a plan for transition of care and long-term follow-up: this can be achieved by determining who will own and drive the process (e.g., the stroke team, neurology, or cardiology); assessing whether your hospital has the infrastructure and staff to support the program, along with any other development needs or logistics; and then identifying whether this could be a quality-improvement or process-improvement project.
• Measure success: align on key metrics to measure success and improvement in your pathway. For example, are there any data you would like to collect? What is your definition of success? Once a patient is deemed cryptogenic, what multidisciplinary pathway, transition of care, and follow-up plan are in place for that patient?

It is vital that neurologists, electrophysiologists, and cardiologists work together in managing, diagnosing, and treating cryptogenic stroke patients. A vascular-trained neurology staff helps ensure appropriate patient selection for insertable cardiac monitors in high-risk patients. Based on our findings, we recommend clinicians implement a Cryptogenic Stroke Pathway at their hospital to help reduce recurrent strokes and improve outcomes for their stroke patients.

Pramod Sethi, MD, is Medical Director of the Cone Health Stroke Center in Greensboro, NC.

1. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2016 update: A report from the American Heart Association. Circulation.
2016;133:e204-e234.
2. Adams HP, Bendixen BH, Kappelle LJ, et al. Classification of subtype of acute ischemic stroke: Definitions for use in a multicenter clinical trial. TOAST. Trial of Org 10172 in Acute Stroke Treatment. Stroke. 1993;24:35-41.
3. Stroke Prevention in Atrial Fibrillation Study. Circulation. 1991;84:527-539.
4. Wolf PA, Abbott RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham Study. Stroke. 1991;22:983-988.
5. Lin HJ, Wolf PA, Kelly-Hayes M, et al. Stroke severity in atrial fibrillation: The Framingham Study. Stroke. 1996;27:1760-1764.
6. Jabaudon et al. Usefulness of ambulatory 7-day ECG monitoring for the detection of atrial fibrillation and flutter after acute stroke and transient ischemic attack. Stroke. 2004;35:1647-1651.
7. Kamel. Detection of atrial fibrillation and secondary stroke prevention using telemetry and ambulatory cardiac monitoring. Curr Atheroscler Rep. 2011;13:338-343.
8. Sanna T, et al. Cryptogenic Stroke and Underlying Atrial Fibrillation (CRYSTAL AF). NEJM. 2014;370(26):2478-2486.
9. Boehringer Ingelheim. Dabigatran Etexilate for Secondary Stroke Prevention in Patients with Embolic Stroke of Undetermined Source (RE-SPECT ESUS). https://clinicaltrials.gov/ct2/show/study/NCT02239120. September 10, 2014. Accessed November 3, 2016.
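The eligibility screening described in the inclusion and exclusion criteria above can be sketched as a simple decision function. This is an illustrative sketch only: the field names and the simplified subset of criteria below are assumptions made for exposition, not a validated clinical tool, and the full published criteria contain thresholds omitted here.

```python
def meets_inclusion(patient: dict) -> bool:
    """Simplified age and imaging inclusion checks (illustrative only)."""
    age_ok = patient["age"] >= 60 or (
        patient["age"] >= 50 and patient["extra_risk_factors"] >= 1
    )
    return (age_ok
            and patient["non_lacunar_on_imaging"]
            and not patient["af_on_telemetry"])


def meets_exclusion(patient: dict) -> bool:
    """Any single exclusion criterion (subset shown) disqualifies the patient."""
    return any([
        patient["major_cardioembolic_source"],
        patient["history_of_af"],
        patient["other_anticoagulation_indication"],
        patient["intracerebral_hemorrhage"],
    ])


def eligible_for_icm(patient: dict) -> bool:
    """Candidate for insertable cardiac monitor under this sketch's criteria."""
    return meets_inclusion(patient) and not meets_exclusion(patient)
```

In practice this screening is a clinical judgment made by the multidisciplinary team; the sketch only makes explicit that eligibility is a conjunction of inclusion checks with no exclusion triggered.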
This retelling in prose of twenty of Shakespeare's thirty-seven plays was originally published just for children. Keeping Shakespeare's own words wherever possible while making the plots and language easily understandable, this very listenable collection has entertained and informed generations of adults as well. With such classic stories as A Midsummer Night's Dream, Much Ado About Nothing, Hamlet, and more, Shakespeare's most memorable characters come to life anew as magicians and fairies, fools and kings weave their magic, mischief, and madness. The list of 20 plays includes: The Tempest, A Midsummer Night's Dream, The Winter's Tale, Much Ado About Nothing, As You Like It, The Two Gentlemen of Verona, The Merchant of Venice, Cymbeline, King Lear, Macbeth, All's Well That Ends Well, The Taming of the Shrew, The Comedy of Errors, Measure for Measure, Twelfth Night (or, What You Will), Timon of Athens, Romeo and Juliet, Hamlet (Prince of Denmark), Othello, and Pericles (Prince of Tyre). Founded in 1906 by J.M. Dent, the Everyman Library has always tried to make the best books ever written available to the greatest number of people at the lowest possible price. Unique editorial features that help Everyman Paperback Classics stand out from the crowd include a leading scholar or literary critic's introduction to the text, a biography of the author, a chronology of her or his life and times, a historical selection of criticism, and a concise plot summary. All books published since 1993 have also been completely restyled: all type has been reset to offer a clarity and ease of reading unique among editions of the classics, and a vibrant, full-color cover design now complements these great texts with beautiful contemporary works of art. But the best feature must be Everyman's uniquely low price.
Each Everyman title offers these extensive materials at a price that competes with the most inexpensive editions on the market, yet Everyman Paperbacks have durable binding, quality paper, and the highest editorial and scholarly standards.

From the Inside Flap:
In the twenty tales told in this book, Charles and Mary Lamb succeeded in paraphrasing the language of truly adult literature in children's terms. Let us not underestimate young readers: they love a complex story with many and varied characters, twists of plot, and turns of fate as much as anyone, but they draw the line at reading in unfamiliar language. The Lambs provide a real feast of plain fare and flavor it with as many tasty tidbits of Shakespearean language as they felt the young reader could easily digest. This deluxe Children's Classic edition is produced with a high-quality, leatherlike binding with gold stamping, full-color covers, and colored endpapers with a book nameplate. Some of the other titles in this series include Anne of Green Gables, Black Beauty, King Arthur and His Knights, Little Women, and Treasure Island.
News Release, Kansas Geological Survey, Nov. 1, 1995 Survey geologist Pieter Berendsen, who has studied the relationship between faulting and the landscape, will present results of his work at the annual meeting of the Geological Society of America in New Orleans on Nov. 7, 1995. For several years, Berendsen has mapped the midcontinent's faults--underground locations where layers of rock have moved in relation to each other. Many of those faults first moved hundreds of millions of years ago, and probably produced earthquakes at the time. However, more recent movements along those same faults, or reactivation, may have taken place during the past several million years, says Berendsen. And that may have affected the landscape above the subsurface. For example, in the Flint Hills of Chase County, Kansas, Berendsen has used data from drilling of oil and gas wells to map underground faults that are part of a well-known fault zone, called the Humboldt Fault Zone, that stretches from northern Kansas to southern Kansas. The Humboldt Fault Zone occasionally produces small earthquakes and geologists say it will continue to produce tremors, though most are too small for people to feel. Berendsen believes the topography of the ground above those faulted areas may have been directly influenced by small movements in the faults. In an area southwest of Cottonwood Falls, a large block of land has been lifted up by faulting, exposing the surface to erosion and creating a landscape that became hilly as it was dissected by erosion. In other places, blocks of land were downdropped, and thus more protected from erosion, resulting in a flatter landscape. "These contrasts in topography match up with areas where there has been faulting in the subsurface," said Berendsen. The original faults in the area's sedimentary rocks probably occurred shortly after the rocks were deposited, about 500 million years ago. 
The reactivation of the faults probably occurred in the past several million years. Those faults may also have influenced the course of rivers in the state. For example, where blocks of land have been lifted up by faulting, rivers may flow along the edges of those blocks. Parts of the course of the Cottonwood River in Chase County, for example, follows the edge of one of those faulted blocks that Berendsen had identified. "Knowing the locations of those old faults, and their reactivation, has important ramifications for many engineering projects," said Berendsen. "This includes dams, power plants, waste-disposal sites, and other building projects. By learning where those old faults have been reactivated, we learn more about the best locations for many facilities."
Whales often have a captivating facade that leaves humans in awe. Majestic and elusive, they make us wonder if there is more to whales than their appearance and migration patterns. It is easy to assume that the human race is the brightest and best of all living creatures, but this is a notion that can easily be contested. As much as we display our intelligence, dominance, and material accomplishments, there are still many things we can learn from the natural world. Not only can we learn from the kindness of dogs, the resilience of trees, or the faithfulness of penguins; we can also learn from whales. Take a look at these characteristics of whales that we should all strive to emulate.

Going with the Flow
Have you heard of the term "hustle culture"? It is the idea that people should work hard, gulping down ounces of coffee and waking up at 4 am to reach their goals. Hustle culture holds that breaking one's back is the only way to be successful. Sadly, this kind of thinking can be toxic and can cause burnout and mental breakdowns. Whales, on the other hand, go with the flow instead of fighting the current. Their migration patterns are dictated by where the waters are warm. Sometimes, allowing yourself time to breathe, going with the flow, and taking a break is what you need to reach a warm, comfortable place. This is one lesson we can learn from whales.

Don't Forget to Breathe
Isn't it interesting that despite living in the water, whales are mammals? This means that they periodically come up out of the water to breathe, which relates to the first lesson we can learn from whales. When we are obsessed with overthinking or drained by the stresses of the day, we end up inflating the extent of our worries, causing mental and physical health consequences. The truth is, taking time to breathe has its benefits.
Deep breathing is a relaxation technique that many healthcare practitioners recommend in clinics, hospitals, and alcohol rehab facilities. It improves blood flow, reduces inflammation, and rids the body of toxins. If you are feeling overwhelmed, remember how whales take time to breathe.

Stimulate Your Mind
According to Whale and Dolphin Conservation, the behavior of these sea mammals suggests that they have bright minds. For one, they use echolocation, the use of sound to locate a fellow creature. Whales also have a part of the brain capable of processing emotions, similar to humans. These lovely mammals are also capable of assigning fellow creatures distinct names, identifiable by certain sound markers. In essence, whales constantly use their minds to solve problems and make things easier. This is another lesson we can learn from whales. Any machinery, including the brain, will start to rust when left idle, so continue to find activities that stimulate your brain, such as pursuing hobbies, reading, using problem-solving skills, and even playing games.

Valuing Friends and Family
How many times have you heard song lyrics that echo "I can make it on my own"? Unfortunately, this sentiment can only be achieved in a metaphorical way. Nobody "makes it" fully on their own. We need people to support us and help us in times of need. Whales travel together in groups called "pods." This way, they can look out for each other and make sure that they all reach their destination without getting lost. Valuing the friends and family who support us is essential to any journey we take. When we are tempted to seek out life on our own, may we be reminded of this common animal instinct: no man, nor animal, is truly an island. The sophistication of human technology, despite its many benefits, has also caused a lot of problems that damage our psyche.
Popular media, for one, causes problems with our self-esteem, making us wonder if we should look or act a certain way to be accepted. This isn't a concern for whales. Even though they stick together in groups, they express themselves in unique ways by singing a specific song. This song is tied to their identity; no other whale has the same tune. In human terms, being less concerned about the opinions of others is a liberating thought. It can give you the opportunity to be your true self, a state that can bring joy and peace of mind.

Feeling Down? Think of the Whales
Truly, there is a purpose to our coexistence with whales and the other wonderful creatures on earth. There is much to learn not just about them, but from them. Through these simple observations of their behaviour, we may learn to swim through life more smoothly.

Sources:
- Timesofindia.indiatimes.com, "Health Benefits of Deep Breathing Exercise"
- Tcnjsignal.net, "Hustle culture is toxic to mental health"
- Us.whales.org, "How intelligent are whales and dolphins?"
- Ed.ted.com, "Why do whales sing?"
Automated subway trains, talking trashcans and sensors showing available parking spaces: Smart city schemes are designed to make everyday life in urban areas easier. Rio de Janeiro is using high-tech systems to prepare for the Olympic games. We visit the city’s digital hub. An alarm is sounding in the “centro de operações,” prompting hectic activity in Rio de Janeiro’s unofficial HQ. A downtown office block has caught fire. While the emergency services are still initiating rescue measures, the traffic experts calculate the best detours and send traffic alerts to digital roadside panels located along the city’s arterial roads. At the same time, they also inform taxi companies and citizens via text messages and social media. Radio reporters in an adjacent room interrupt their stations’ programming to update listeners on the traffic situation. The command center is located behind a mirrored façade on the edge of Rio’s downtown area. With its 50-meter-wide wall of screens, the main operations room is reminiscent of a futuristic hub. The screens show images of pinch points throughout the city: roads, tunnels and bridges that are used by cars at all hours. A digital map of Rio, on which traffic jams are shown in red, purple and yellow, forms the centerpiece. Four hundred employees control the city’s activities around the clock, seven days a week. They analyze the data supplied to their computers by 900 cameras and more than 10,000 sensors. This enables them to locate garbage trucks and – if necessary – send them to places where garbage is piling up. Broken-down trains, blackouts, interruptions to the water supply and problems at sewage treatment plants – all these are immediately reported to Rio’s control center, which quickly tries to find solutions. “We are the eyes and ears of Rio de Janeiro,” says Pedro Junqueira, the control center manager.
The smooth flow and processing of information in real time form the core of what urban planning authorities and IT companies mean by “smart city.” It is a bold definition, because the term, which has been popular in urban planning circles for around ten years, is as multi-layered as the problems the world’s metropolises wrestle with. However, some issues are almost universal: traffic jams, garbage mountains, parking problems. Other difficulties include safeguarding the water supply, security and the efficient use of energy. IT and digital networks are increasingly helping to create safe, clean and well-organized municipalities. Civic leaders’ biggest nightmares have created a future, guaranteed growth market for the IT industry. Today, more than half the world’s population lives in cities, and around the world 180,000 people migrate to metropolises every day. According to U.N. projections, in 35 years around 70 percent of the world’s population will live in urban areas. Companies like IBM, Siemens, Cisco and, most recently, Google, are competing for the profits generated by these developments. In the worst-case scenario, an entire city could be switched off with a single mouse click. The control center in Rio de Janeiro was conceived by IBM. Shortly before the mayor, Eduardo Paes, awarded the contract in 2010, the city was struck by a catastrophe. Torrential rains had caused devastating landslides. Many houses were destroyed and hundreds of people died. The authorities were accused of having failed in their duties. The argument was that there should have been an appropriate early warning system in place, a properly functioning sewer system and more sturdily constructed houses – especially in the favelas, all of which would have greatly reduced the levels of devastation.
While the city is proud of its spectacular mountains and vast bays, its infrastructure is thoroughly dilapidated. Paes was certain that as host of the 2014 Soccer World Cup and this summer’s Olympic games, Rio was in great need of improvement, but time and money were scarce. The “centro de operações” represented the best chance of redesigning at least some aspects of life in the city more efficiently. Siemens is one of the world’s largest suppliers of smart city solutions, focusing on traffic management. Martin Powell, who heads up the company’s urban development division, was previously employed as an environmental consultant by the Greater London Authority. At the time, Siemens built London’s toll system. Today, this system automatically records, and collects payments from, all vehicles in central London. At the same time the company improved the city’s subways. An automated control system now regulates the frequency of train services. This has increased capacity by a third. The result: car traffic decreased by 20 percent, leading to a reduction in air pollution and traffic jams. In Berlin, Siemens built a sensor and camera-supported congestion reporting system. The potential savings are enormous: In Germany alone, vehicles stuck in traffic consume almost 30 million liters of gasoline a day. Lastly, in a pilot project, Siemens installed radar sensors on some of Berlin’s streetlights, which transmit information about available parking spaces to car drivers via an app. Barcelona, New York, Tokyo or Copenhagen – today, there are thousands of smart city schemes all over the world. Entire neighborhoods are designed to be energy efficient and are equipped with high-tech solutions right from the start. Like Masdar City, the solar-powered, “carbon-neutral city of science” in the desert outside Abu Dhabi. Or the Songdo neighborhood in South Korea, built only eleven years ago and designed entirely on the drawing board. 
Songdo is located on a man-made island near Seoul and is currently home to 40,000 people. It is also Cisco’s showcase project. Cameras peer into almost every corner. Apartments, offices, hospitals and schools in Songdo are fully networked and can be controlled with the aid of mobile communications. A further Cisco project is being tested in Barcelona: Thousands of motion detectors have been installed on streetlights, in trashcans and underneath the surface of the road. Traffic lights turn green immediately when a fire truck approaches, full trashcans send a signal directly to garbage trucks. IT giant Google has also entered the networked city business. Sidewalk Labs, the Google subsidiary founded for this purpose, is headed up by CEO Daniel Doctoroff. Doctoroff served as the deputy mayor of New York City under Michael Bloomberg, and was responsible for the city’s infrastructure and economic development. Sidewalk Labs has announced that its aim is to supply all urban areas around the world with free Internet connections. The company started implementing its plans in New York City, where telephone booths are gradually being replaced by powerful WiFi hotspots. In the long run, Google is planning to build a system of automated buses, trains and self-driving cars. Despite the obvious advantages of these smart city schemes, they have not infrequently been criticized. Data protection campaigners warn of the risk of total surveillance. And there is an even more sensitive issue: The complexity of the applications and the centralized nature of the networks make “smart cities” more vulnerable to system failures, and to attacks by hackers and terrorists. “Highly centralized systems are always of interest to potential attackers. In the worst-case scenario, you could disable an entire city with a single mouse click,” warns Sandro Gaycken, a cyber security expert and Director of the Digital Society Institute at the European School for Management and Technology in Berlin. 
The industry knows this risk, and counters the threat with firewalls and emergency backups. It also advertises its successes. Siemens manager Powell emphasizes the positive effects of smart technology. “In London, cameras were installed everywhere. This has caused a marked decrease in the crime rate,” he says. “In addition to this, traffic moves more quickly, which has caused a reduction in air pollution.” So what were the experiences in Rio de Janeiro? “We cannot eliminate all the problems at once,” says Junqueira, of the city’s high-tech control center. He does, however, regard the 2014 Soccer World Cup as having been a successful dress rehearsal for the Olympics. Of course, there were traffic jams when large numbers of people flocked to the Maracanã stadium or to the public viewing zone along Copacabana beach. But despite hundreds of thousands of visitors, even locals were astonished that the traffic chaos was not more severe than normal. Junqueira is especially proud of his latest early-warning system. A highly sensitive weather radar system located on the city’s hills can spot rainfall as far as 350 kilometers away. Powerful computers then predict which neighborhoods are threatened by severe weather. At-risk roads can thus be closed, and inhabitants alerted to head for the shelters by text message. Most recently and in partnership with the Japanese technology group NEC, Rio tested sensor-based technologies including a smartphone app designed to predict landslides. This represents another lasting benefit for the city, one that will remain after the Olympic fire has gone out.
Have you ever noticed how a certain scent can conjure up a lovely memory or a beautiful moment? It’s a feeling that can transport you to a different time and place, and there’s a good reason for this. Namely, scent is strongly connected to our emotions, and you can use this fact to your advantage. If you need help concentrating at work, or if you’re experiencing sleepless nights, scent has the power to help. The influence of fragrance is bigger than you think and can positively affect body and soul. Our sense of smell is nowhere near as good as that of animals (after all, a dog has 200 million olfactory receptors while a human has “only” 5 million), yet scent does have a powerful effect on us. This is most likely an evolutionary remnant, developed during a time when scent played an important role in survival. Primitive tribes depended on their sense of smell to warn them about enemies and to find food. SCENT AND EMOTIONS Even today, scent plays a role in survival. Think about gas leaks: if you smell one in time, it can be the difference between life and death. And perhaps you think you’ve chosen the love of your life based on looks, but in reality, it was probably pheromones that attracted you. This is because scent has such a tremendous impact on falling in love and on our emotions in general. The nose is the only organ that has an almost direct influence on our emotions because it is closely linked to the amygdala, otherwise known as the emotional center of the brain. Scent is also strongly linked to memory. We all know the feeling that can suddenly overwhelm you when, for example, you smell a certain sunscreen that reminds you of that one summer holiday. Or perhaps you get a whiff of the perfume your mother always wears and are instantly transported back to your childhood home. The French writer Marcel Proust described this effect as far back as 100 years ago in his book À la recherche du temps perdu, translated into English as In Search of Lost Time.
This is why the concept of scent influencing your mood is also sometimes referred to as the Proust effect. Your subconscious builds a personal scent database that connects a certain scent to a specific memory, which stirs up an emotion. When you smell this scent, your memory recognition is activated. The feeling that is triggered by a scent and the memories they conjure can also lead to a certain behavior. ADD THESE SCENTS TO YOUR DAILY ROUTINE AND REAP THE BENEFITS With this knowledge, you can use scent to your advantage and incorporate it into your daily life. Are you stressed and needing more peace? Lavender could be the answer, because this purple plant is known for relaxing muscles and nerves, and therefore has an overall calming and restful effect. Lavender could be considered a miracle plant, because its scent appears to help alleviate insomnia and pain, while also releasing “feel good” hormones throughout your bloodstream which make you feel happier. Could you use help concentrating at work? Time to give peppermint oil a try. Even if you’re rushing to the gym after a long workday, peppermint is a good choice, because research suggests that it can positively influence your performance. And this fact may be the most useful: peppermint can work wonders on hangovers. Eucalyptus is another scent that can help improve concentration, and is even a well-known home, garden and kitchen solution to help you breathe better when you have a bad cold. Do you suffer from nightmares? Then the scent of flowers is worth a try because research has indicated that it can cause more positive dreams. CREATE A RELAXING ATMOSPHERE IN YOUR HOME An innovative way to integrate different fragrances into your home can be found in the Rituals Perfume Genie: a stylish, wireless, and ingenious perfume diffuser. The Genie allows you to control the timing and intensity of your home fragrance whenever and wherever you happen to be, thanks to an app on your phone. 
It also adds a chic touch to your interior and can be operated manually if desired. Available in six different fragrances, you can alternate them to create a peaceful atmosphere at home.
Mind the gap - Equality and Partiality by Thomas Nagel Oxford, 186 pp, £13.95, November 1991, ISBN 0 19 506967 6 Sidney Morgenbesser says that ‘All Philo is Philo 1.’ He means, I think, that nothing is established in philosophy. At any time everything can be turned around, and the front line is pretty close to base camp. A book by Thomas Nagel proves Morgenbesser’s point. There is no better philosophical primer than Nagel’s What Does It All Mean? A Very Short Introduction to Philosophy (1987). It works with its intended, adolescent audience, but (this is what supports Morgenbesser) it is also an education for old hands. If Nagel’s Introduction takes the beginner to the frontier, his less pedagogical writings bring the subject back to basics: his whole oeuvre shows, par excellence, that what Morgenbesser says is true. To explain the nature of Nagel’s achievement, I have to say something controversial (but not eccentric) about the foundational role of intuition in philosophy. Philosophers ask general questions such as What is knowledge? What is justice? Are we free? And they construct their answers to those questions under the constraint of intuitive belief – belief, that is, for which no (or only a very short) argument is given, because the opposite of the belief is considered to be something which, like a contradiction, cannot coherently be thought, or because the belief just seems right, even though its negation could readily be entertained. Intuitions bearing on the questions paraded above are, or might be, that I can know only what is true; that it is unfair for one person to have less than another through sheer bad luck; that, if I could not have done otherwise, then I did what I did unfreely. Intuitions can be mistaken. You can even wrongly think that something is not coherently thinkable, because, for example, you haven’t exercised enough imagination.
But, fallible though they are, and although they vary in strength, and sometimes contradict one another, intuitions shape the search for answers in philosophy. And while harmony between intuition and the desired answer is a matter of complex negotiation, in which some intuitions may have to be disregarded, it remains true that a satisfying conclusion is at peace with intuition, or, anyway, with the stronger or less deniable part of it. Nagel has been at the centre of Anglophone philosophical endeavour for twenty-five years because he goes to, and operates with mastery at, the intuitive heart of every issue he addresses. He does not fit his arguments up with elaborate qualifications that are there to block all the objections professionals could push. Nor does he devise fancy counter-examples that shimmy through the holes in the arguments of others. He stays close to intuition, and, being both exceedingly sensitive to what it says and remarkably creative about what to do about what it says, he has had a transforming influence on many parts of philosophy. One example of the efficacy of Nagel’s operations with intuition is his essay ‘What is it like to be a bat?’, which was part of a campaign against ‘reductionist euphoria’ in the philosophy of mind. There is something it is like to be a bat. Since sonar perception looms large in bat consciousness, we do not know what that something is, and we know, intuitively, that perfect knowledge of bat neurophysiology would not relieve our ignorance. An essential truth about the mental, that there is something it is like to have a mind, escapes the notice of reductive materialism. Another example of Nagel’s intuitive penetration is his suggestion about our confidence that we enjoy freedom of choice: namely, that what threatens it is not, as we usually suppose, causal determinism, but something more general, which Nagel calls ‘the objective point of view’. 
From that point of view, what happens in the world is, precisely, a sequence of happenings, one event after another, and the agent’s awareness of a domain of choice that lies before her goes unrepresented, whether or not the train of events is deterministically conceived. ‘The real problem of free will stems from a clash between the view of action from inside and any view of it from outside. Any external view of an act seems to omit the doing of it.’ The examples indicate that Nagel is not just rare at identifying intuitive paths and shortfalls, but that he also proposes a theory of intuition, one that explains why we are burdened with the conflicting intuitions that keep philosophy going. We cannot forsake the objective point of view, yet we cannot pretend that there is not something it is like to be conscious, or that it is not up to us what we do next. We have philosophical problems because we find it hard to put together what the different points of view – subjective and objective – that we cannot choose not to occupy disclose to us. In an earlier diagnosis of the intellectual conflicts that engage philosophy, whose great exponent was Gilbert Ryle, the Dilemmas (1954) have an illusory character: they come because we misconstrue ‘non-competing stories about the same subject-matter’ as ‘rival answers to the same questions’. We get into a jam about free will because we let the ordinary discourse of personal responsibility run up against the extraordinary discourse of psychological theory. They should be kept apart, and the philosopher’s job is to put up ‘ “No Trespassing” notices’. In Nagel’s less sanguine alternative conception the ‘stories’ told from the discordant standpoints really do strain against one another (whether or not they concern the same subject-matter). For Ryle, you just have to be clear about what you’re trying to do. There’s this vocabulary, and then there’s that one: use one at a time and you can’t go wrong. 
For Nagel, there is an irrepressible drive to unify what the different standpoints disclose, a drive that it may not always be possible to satisfy. The subjective/objective polarity governs not only metaphysics but also ethics, and Nagel argues, in the book under review, that the task of political philosophy is to reconcile the opposed deliverances of two standpoints. In the personal point of view, everything gets its value from my distinctive interests, relationships and commitments. But I can also look at things impersonally, and then I realise that the interests and projects of others are just as important as mine are, that my life is no more important than anyone else’s is. [*] Persons to Nagel’s right will wonder about other certainties he displays: that a social democratic solution which ensures a high basic minimum but also allows large inequalities is an inadequate ‘response to the impartial attitude which is the first manifestation of the impersonal standpoint’, and that swingeing inheritance and gift taxes do not violate its second ‘manifestation’, which respects the individual’s desire to benefit his family.
So some people on Reddit were talking about Bahá’í jargon recently, and someone asked for the definition of the Five-Year Plan—because it’s been “evolving so much, I don’t know what it currently is anymore”. Here, then, is a stab at a definition. Literally, the series of Five Year Plans are simply global plans, carried out under the guidance of the Universal House of Justice, to implement the Divine Plan as elaborated by ‘Abdu’l-Bahá in His Tablets of the Divine Plan. There have been other “Five Year Plans” in the past, but the current series of four consecutive plans began in 2001 and will last until 2021, to be followed by further plans. The current series of plans has been characterized by two principal, complementary movements, which have remained the focus of each plan in the series: - The movement of increasing numbers of collaborators through the training institute process—which offers them training to offer specific, concrete acts of service, including but not limited to the “core activities”—study circles, children’s classes, junior youth groups, and devotional meetings; - The movement of clusters from one stage of development to the next, where each stage is characterized by a higher level of intensity, organization, and systematization. The first in the series of Five Year Plans (2001–2006) introduced these two complimentary movements, and provided an opportunity for national Bahá’í communities to define “clusters” as distinct geographical divisions within their countries. This was done to break down the task of measuring community development and growth to a more manageable sub-national level. This was also when most people were introduced to study circles and to the materials of the training institute. At this time, not many people grasped the purpose of the training institute, believing it to be yet another deepening program among many others. 
This perception gradually began to shift as Bahá’ís began to implement the institute process across the world, building up experience and reflecting on which kinds of implementations worked and which didn’t. Children’s classes and devotional meetings were also introduced as core activities, to be open to all. The second in the series of Five Year Plans (2006–2011) introduced the junior youth spiritual empowerment programme as an element of the plan, as communities worldwide identified the need to engage young people between the ages of 11–14 as a particularly receptive population. At this point, what’s now known as Ruhi Book 5 was added to the main sequence of institute courses, allowing participants in the institute process to receive training on how to engage and empower junior youth to arise and serve humanity. One of the main numeric goals of this particular plan called for the establishment of 1,500 intensive programs of growth in clusters around the world. This entailed the establishment in these clusters of a working, self-sustaining, and ever-expanding institute process in which new collaborators could be trained in specific acts of service and then arise to carry forward that same process. As Bahá’ís embraced the process and arose to serve, striving to understand what an intensive program of growth should look like in their clusters, a great deal of learning was generated that would inform future plans. The third in the series of Five Year Plans (2011–2016) set a new numeric goal of 5,000 programs of growth worldwide. In this case, the requirement was that there simply be a program of growth—i.e., an institute process operating at any level of intensity. At this point, many of the clusters that had established an intensive program of growth during the previous plan began assisting believers in adjoining clusters to establish the institute process there. 
The concept of “milestones” was also elaborated during this plan; using this terminology, the numeric goal for this plan was for 5,000 clusters (or fully one-third of all clusters worldwide) to reach the first milestone. It was also during this plan that the construction of new Houses of Worship was announced in several countries and clusters worldwide. The importance of nurturing the devotional character of a community through devotional gatherings became much clearer as Bahá’ís gained a better understanding of the connection between worship and service, and the unique role of the Mashriqu’l-Adhkár in community life. The fourth in the series of Five Year Plans (2016–2021) is the one we’re in now, and it calls for raising the level of intensity in each of the 5,000+ clusters targeted during the previous plan, so that each of these clusters can be said to have an intensive program of growth in place (i.e. a working, self-sustaining, and ever-expanding institute process). In other words, each of these clusters is to reach the second milestone or beyond during this plan. At this point, enough learning has been generated through the experiences of Bahá’í communities around the world that the framework of the plans is clear and needs only to be exploited to its fullest potential. tl;dr: An evolving series of plans with the overall aim of developing the capacity of more and more individuals, communities and institutions to serve humanity. Each plan in this series has had its own particular focus and goals, but each one has built on the last and served to carry forward two complementary movements: the movement of increasing numbers of collaborators through the training institute process, and the movement of clusters from one stage of development (or organization/systematization) to the next.
Mom and dad Mom wants to talk with Dad for 2.2 hours, but Dad wants to talk for 2 times as many hours as Mom. How many hours does Dad want to talk with Mom? Related math problems and questions: Dad is two years older than Mom. Mom is 5 times as old as Katy. Katy is half Jan's age. Jan is 10 years old. How old is everyone in the family? How old are they all together? How many times can you subtract the number 4 from the number 64? - Mom and daughter Mom is 30 years older than her daughter. What will the age difference between them be in 35 years? Mom bought 13 rolls. Dad ate 3.5 rolls. How many rolls were left after Peter ate two more at dinner? In the zoo there were as many elephants as ostriches. There were 4 times as many monkeys as elephants, and as many monkeys as flamingos. There were 5 times fewer wolves than flamingos. How many of these animals were there altogether? We know that there were four wolves. - Your Mom Your Mom can clean your entire house in 3 hours. However, your Dad takes 5 hours to clean the house. How long will it take them to clean the house if they work together? (Write the numerical answer rounded to the nearest thousandth.) - 3 cats 3 cats eat 3 mice in 3 days. How many mice will be eaten by 10 cats in 10 days? Patrick's step is 65 cm long, and his dad's step is 10 cm longer. Dad crosses the bridge in 52 steps. How many steps does Patrick need to cross the same bridge? In the hutch are 48 mottled rabbits. There are 23 fewer brown rabbits than mottled ones, and 8 times fewer white rabbits than mottled ones. How many rabbits are in the hutch? Martin has as many brothers as sisters. His sister Jana, however, has 2 times as many brothers as sisters. a) How many children are in this family? b) How many boys and how many girls are in the family?
- Add symbols Add signs (+, -, *, /, brackets) to make the equation true: 1 3 6 5 = 10. This is for the 4th grade of primary school, with no negative numbers yet. A television is the fifth most expensive and the tenth cheapest TV in the store. How many different TVs are in the store? - Rabbit family A rabbit family ate 32 pieces of carrot; the small one ate 6 pieces, and Dad ate 5 more than Mom. How many did Mom eat? On the farm they have a total of 110 birds. Geese and turkeys together number 47, and there are three times as many hens as turkeys. How many birds of each species are there? There are 28 buns; the son ate 1/2 of them, and Dad ate four. How many of them remain on the baking dishes? “Exactly 114 hours from now, we will sit down at the Christmas Eve table.” What day and what time was it when Dad said this sentence? They sit down at the Christmas Eve table at exactly 18:00 (6 PM). - Buns and toasts Two buns weigh 10 grams more than two toasts. One bun and two toasts weigh a total of 110 grams. How many grams do three toasts weigh? How many grams does one bun weigh?
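The headline problem and the house-cleaning problem above each reduce to a one-line computation. A minimal Python sketch, assuming the common reading of "2 times more" as "twice as many" (the phrasing is ambiguous, so treat that as an assumption):

```python
# Mom wants to talk for 2.2 hours; Dad wants twice as many hours.
mom_hours = 2.2
dad_hours = 2 * mom_hours
print(dad_hours)  # 4.4

# Working together: Mom cleans the house in 3 hours, Dad in 5 hours.
# Their combined rate is 1/3 + 1/5 of a house per hour, so the time
# needed is the reciprocal of that sum.
together = 1 / (1 / 3 + 1 / 5)
print(round(together, 3))  # 1.875
```

The work-rate pattern (add the rates, then invert) is the standard approach for any "working together" problem on this page.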
Italy is a diverse country that boasts miles of Mediterranean coastline, scenic lakes such as Lake Garda, snow-capped Alps and the sun drenched regions of Umbria and Tuscany. The country is home to a variety of native plants, many of which are used as seasonings across the globe. There are numerous flowers native to Italy that can be grown in the home garden. Silver thyme (Thymus citriodorus) is a garden hybrid that stems from common thyme (Thymus vulgaris) native of Southern Italy. Silver thyme is an herb that boasts small lilac flowers, which appear at the end of silvery green leaves. Reaching a height of about 6 inches to a foot, silver thyme has a mounding habit and a lemony scent that can be incorporated into a traditional herb or butterfly garden. The early summer flowering plant can be cultivated in USDA Hardiness Zones 4 to 9. Like most herbs, silver thyme requires full sunlight to thrive. A well-drained soil that's kept mostly dry is perfect for this plant, which requires very little water. A member of the figwort family, the common snapdragon (Antirrhinum majus) is a flowering short-lived perennial that hails from Italy, as well as North Africa. The plant reaches a variety of heights depending on the cultivar, from 4 inches to several feet. The common snapdragon offers columns of soft, silky blooms in a range of pastel colors, from pale lemon yellow to pink or lilac. Snapdragons are especially popular with children, as squeezing the flowers gently causes them to suddenly "snap" open. The common snapdragon is best suited to USDA Hardiness Zones 4 to 11, preferably in full sunlight. A rich, well-draining soil is ideal for the snapdragon, and the plant should be watered frequently until established. Big periwinkle (Vinca major) is a flowering herbaceous member of the dogbane family native to Italy and France. Rarely growing above a foot high, big periwinkle is a sprawling plant that offers a garden dull green leaves and lavender or "true blue" flowers. 
The plant is popular as a low-growing ground cover, or as a graceful hanging basket plant. Big periwinkle does best in full sun or partial shade in USDA zones 7 to 9. The plant should be cultivated in a fertile, loose woodsy soil for best results. Though big periwinkle will make do with dry soils, a soil that's moist to the touch is preferable.
The Making of Global World Class 10 Notes Social Science History Chapter 4 SST Pdf free download is part of Class 10 Social Science Notes for Quick Revision. Here we have given The Making of Global World Class 10 History Chapter 4 Notes. According to the new CBSE Exam Pattern, MCQ Questions for Class 10 Social Science with Answers carry 20 marks.

Subject: Social Science Notes
Chapter: History Chapter 4
Chapter Name: The Making of Global World
Category: CBSE Revision Notes

The Making of Global World Class 10 Notes Social Science History Chapter 4

Trade: The activity of buying, selling or exchanging goods or services between people, firms or countries.

Global interconnectedness: As early as 3000 BCE (Before the Common Era), an active coastal trade linked the Indus Valley civilization with present-day West Asia. Thus trade, migration of people, the movement of capital, goods, ideas, inventions and much more have helped in creating a global world since ancient times. Christopher Columbus was the explorer who discovered the vast continent of America. He took the sea route to reach there.

First World War: The war which broke out in 1914 engulfed almost the entire world. The war was fought in Europe, Asia, Africa and the Pacific. Because of the unprecedented extent of its spread and its total nature, it is known as the First World War.

'Chutney music', popular in Trinidad and Guyana, is a creative contemporary expression of the post-indenture experience. It is an example of cultural fusion between the Caribbean islands and India.

Role of the 'Silk route': The routes on which cargoes carried Chinese silk to the West were known as 'Silk routes'. Historians have discovered several silk routes over land and by sea, covering vast regions of Asia and connecting Asia with Europe and Northern Africa. Pottery from China and textiles and spices from India and South Asia also traveled the same routes. In return, precious metals like gold and silver flowed from Europe to Asia.
Culturally, Buddhism emerged from Eastern India and spread in several directions through the silk route.

Indentured labour: a bonded laborer under contract to work for an employer for a specific amount of time, to pay for his passage to a new country or home. Reasons why it can be described as a new system of slavery:
- Many migrants agreed to take up work to escape poverty and oppression in their home villages. They were cheated and given false information by the agents regarding their destination, modes of travel, the nature of work and working conditions.
- Often migrants were not even told that they were to go on long sea journeys.
- The tasks allotted to them on plantations were extremely heavy and could not be completed in a day. They were beaten or imprisoned.
- Deductions were made from wages if the work was considered unsatisfactory.
- Living and working conditions were harsh and there were few legal rights to protect them.

A Corn Law was first introduced in Britain in 1804, when the landowners, who dominated Parliament, sought to protect their profits by imposing a duty on imported corn. This led to an expansion of British wheat farming and to high bread prices.

Effects of the abolition of the Corn Laws: This allowed merchants in England to import food grains from abroad at lower costs.
- It led to widespread unemployment in the agricultural sector.
- It also resulted in the rise of a prosperous capitalist class in the urban areas.
- Unemployment in the rural sector forced the movement of labor from the agricultural to the industrial sector.

Europeans were attracted to Africa because Africa had vast resources of land and minerals. Europeans came to Africa hoping to establish plantations and mines to produce crops and minerals which they could export to Europe. The loss of cattle to disease destroyed African livelihoods.
Planters, mine owners and colonial governments now successfully monopolized what scarce cattle resources remained to force Africans into the labor market. African countries were militarily weak and backward, so they were in no position to resist military aggression by European states.

Food offers many examples of long-distance cultural exchange:
- Traders and travelers introduced food crops to the lands they traveled. Many of our common foods, such as potatoes, maize, soya, groundnuts, tomatoes, chilies and sweet potatoes, came from America.
- It is believed that noodles traveled west from China to become 'spaghetti', or perhaps Arab traders took pasta to fifth-century Sicily (an island in Italy). Indian 'rotis' have become 'tortillas' in Mexico, America and western countries.
- Europe's poor began to eat better and live longer with the introduction of the potato.

Economic effects of the First World War on Britain:
- To finance war expenditure, Britain had borrowed liberally from the US. This meant that at the end of the war, Britain was burdened with huge external debts.
- The war had disturbed Britain's position of dominance in the Indian market. In India, the nationalist movement had gathered strength and anti-British feeling had become stronger among common people. Promotion of Indian industries had become one of the objectives of the nationalist leaders, which adversely affected industries in Britain.
- There was a widespread increase in unemployment coupled with a decrease in agricultural and industrial production. Cotton production collapsed and the export of cotton from Britain fell dramatically.
- Unable to modernize, Britain was finding it difficult to compete with the US, Germany and Japan internationally.

Rinderpest (cattle plague): an infectious viral disease of cattle, domestic buffalo, etc.

Opium trade: the traffic that developed in the 18th and 19th centuries in which Great Britain exported opium grown in India to China.

The Great Depression:
An economic situation in which most parts of the world experienced catastrophic declines in production, employment, incomes and trade. It began around 1929 and lasted till the mid-1930s.

The Great Depression in the US between 1929-30:
- Agricultural overproduction. Falling agricultural prices made it even worse. As prices fell, agricultural income declined. To meet this situation, farmers brought larger volumes of produce to the market to maintain their small incomes. The excess supply could not be sold due to a lack of buyers, and farm produce rotted.
- US loan crisis. In the mid-1920s, many countries financed their investments through loans from the US. The overseas lenders panicked at the first sign of trouble, and countries that depended crucially on US loans faced an acute crisis when those loans were withdrawn. It led to the failure of major banks and the collapse of currencies.

Although there was unprecedented economic growth in the West and Japan, nothing was done about the poverty and lack of development in countries which were earlier colonies. Thus there arose a need for the developing nations to organise themselves into the G-77 group to demand a New International Economic Order (NIEO). NIEO meant a system that would give them control over their own natural resources, more development assistance, fairer prices for raw materials, and better access for their manufactured goods in developed markets.

Bretton Woods Agreement: The main aim of the post-war international economic system was to preserve economic stability and full employment in the industrial world. A framework for the scheme was prepared. The famous economist John Maynard Keynes directed the preparation of the framework, and it was agreed upon at the United Nations Monetary and Financial Conference held in July 1944 at Bretton Woods, New Hampshire, in the USA. According to the Bretton Woods Conference, the International Monetary Fund (IMF) and the World Bank were set up.
The IMF was set up to deal with external surpluses and deficits of its member nations, and the World Bank was to finance post-war reconstruction. These two are referred to as the Bretton Woods institutions or, sometimes, the 'Bretton Woods twins'. Decision-making in these institutions was controlled by the Western industrial powers, and the US even had a veto over their key decisions. The post-war economic system is often described as the Bretton Woods system.

More Resources for CBSE Class 10
- NCERT Solutions
- NCERT Solutions for Class 10 Science
- NCERT Solutions for Class 10 Maths
- NCERT Solutions for Class 10 Social
- NCERT Solutions for Class 10 English
- NCERT Solutions for Class 10 Hindi
- NCERT Solutions for Class 10 Sanskrit
- NCERT Solutions for Class 10 Foundation of IT
- RD Sharma Class 10 Solutions

We hope the given The Making of Global World Class 10 Notes Social Science History Chapter 4 SST Pdf free download will help you. If you have any query regarding The Making of Global World Class 10 History Chapter 4 Notes, drop a comment below and we will get back to you at the earliest.
This is a picture of Prof. Lewin, taken by Prof. Lewin. It was the "Astronomy Picture of the Day" (APOD) on September 13, 2004. It was also presented to his 8.03 students as a challenge to obtain extra course credit if they were able to explain this phenomenon. On December 7, Prof. Lewin revealed the physics (and demonstrated it) in lecture 22. The solution was also revealed by APOD to the many thousands of people who were puzzled by it and who tried to explain it. This course features a full set of lecture videos, as well as assignments, exams, and other course materials. » Download the complete contents of this course. In addition to the traditional topics of mechanical vibrations and waves, coupled oscillators, and electromagnetic radiation, students will also learn about musical instruments, red sunsets, glories, coronae, rainbows, haloes, X-ray binaries, neutron stars, black holes and big-bang cosmology. OpenCourseWare presents another version of 8.03 that features a full set of lecture notes and take-home experiments. This is a collection of lecture notes. Our objective is to help students find all engineering notes, with lecture slides in PowerPoint, PDF or HTML files, in one place, because we often lose a lot of time searching in Google, Yahoo or similar search engines to find or download good lecture notes in our subject area. It is also difficult to find popular authors' or books' slides free of cost. If you find any copyrighted slides or notes, please inform us immediately by comment or email at the following address, and I will take action to remove them. Please click below to download ppt slides / pdf notes.
If you face any problem in downloading, find any link that does not work correctly, have any idea to improve this blog/site, find any written mistake, or think some subject notes should be included, then give your suggestion as a comment by clicking on the comment link below the post (bottom of page) or email us at this address: [email protected]?subject=comments on engineeringppt.blogspot.com. I will consider your comments within 1-2 days. To find your notes quickly, please see the contents on the right-hand side of this page, which are alphabetically arranged, and click on an entry; immediately after clicking you will find all the notes (ppt / pdf / html / video) for the subject you are searching. It is better to search for your subject notes by clicking on the search button in the middle of the right side of this web page. Then enter your subject and press the Enter key, and you can find all of your lecture notes and click on them. Thank you for visiting our site. Click here to download the files:- The following documents were made available to the students. They were shown and discussed during the lecture sessions noted in the table. Driven Coupled Oscillators (PDF) Analysis of a Triple Pendulum (PDF) Professor Walter Lewin demonstrating the two normal modes of a coupled pendulum. This demonstration can be viewed on the video of Lecture 5. (Image courtesy of Markos Hankin, MIT Physics Department Lecture Demonstration Group.)
A Macedonia by Any Other Name

The Balkans desperately need help, but Greece won't stop picking a fight over what to call its northern neighbor.

On Sunday, Feb. 4, the latest chapter of a 26-year dispute played out in Syntagma Square in central Athens. Hundreds of thousands of Greeks gathered in front of parliament to protest a potential deal that would conclude the dispute between Greece and the Former Yugoslav Republic of Macedonia (FYROM) over the name of the latter country. A march in Thessaloniki two weeks prior also brought hundreds of thousands to the streets. Another is to follow in Patras soon. And these protests seem to be only the beginning of a wave that will peak this summer, when Greece and FYROM expect to conclude negotiations. Since 1992, when Yugoslavia fell apart, Greece has objected to the use of the name "Macedonia" by its neighboring country. Despite the fact that it has since been recognized by most other countries as the Republic of Macedonia, Greece's reluctance to come to an agreement over the issue has stopped its neighbor from joining NATO and working toward joining the European Union. In theory, a deal should be easy: Greece and FYROM have already discussed proposals granting recognition to the latter for the name New Macedonia or Northern Macedonia. But the tightly packed crowds at Syntagma Square last month showed many Greeks will protest any name that includes the word "Macedonia" in any form — that is to say, they'll protest any diplomatic solution at all. Under the shadow of a 140-square-meter flag hanging from a crane, the communal chant rang out: "Macedonia is Greek."

History and conspiracy

The issue is clearly emotional, but the root of the problem is both historical and political. Since 1992, nationalist narratives have taken hold in both countries around the issue.
In FYROM, especially under former prime minister Nikola Gruevski, the government tried to establish a direct link between modern inhabitants of Slavic Macedonia and Alexander the Great, a link which doesn't stand up to scrutiny. Greeks, on the other hand, see a plan by dark forces to take over the Greek province of Macedonia, perhaps an echo of a post-World War II plan by Josip Broz Tito and Greek communists to establish a republic that included FYROM and northern Greece as part of Yugoslavia. But in practical terms, both narratives are false. While the administrative region Macedonia extended into FYROM and Bulgaria under the Romans and the Byzantines, no link can be established between the ancient Macedonians and the Slavic populations that arrived almost a thousand years later in the area. But equally, for all the following centuries, these peoples were indeed called Macedonian alongside Greek and Bulgarian populations. Furthermore, it would be impossible for FYROM to enforce any claim to Greek territory, as its economic power and armed forces are minuscule compared to its southern neighbor. However, among the demonstrators on Sunday, it was commonplace to believe the rumor that FYROM's constitution includes such claims to Greek territory — even though there is an explicit mention in that document that FYROM doesn't have any border disputes or claims toward its neighbors. Still, the narrative persists. "They are communists I'm telling you, they'll sell off everything," I overheard in a conversation next to me at the Syntagma protest. Prior to the demonstration, politicians of all shades had done much to fan conspiracy theories. Sofia Voultepsi, an MP with New Democracy, the opposition party in the Hellenic Parliament, said in an interview that "Tsipras has made a deal to sell off Macedonia in exchange for a debt haircut." The same MP, who had served as a parliamentary spokeswoman in the past, had once accused the BBC of being run by arms dealers.
An upended political landscape

But it's a left-wing politician who has become the symbol of resistance to compromise: legendary composer Mikis Theodorakis, now 92 years old, who is widely known for his struggle against the Greek 1967 junta. Theodorakis, whose house was attacked with paint by anarchists the previous night over the Macedonia issue, said in a speech during the rally: "Yes, I'm a patriot, internationalist; I disdain fascism in all its forms, especially in its most deceitful and dangerous one, the left-wing one." It is a major departure for a man whose name has been connected intimately with left-wing culture in the country. But it's also symbolic of how the issue has upended many certainties in Greece. For the ruling leftist Syriza party, it creates a headache on the domestic front while they are under pressure from the United States to conclude a deal. Their coalition partners, the hard-right Independent Greeks party, although unwilling to collapse the government over the issue, are more sympathetic to the protesters. Opposition parties were quick to try and capitalize on these tensions. But for them, too, the issue poses problems: The center-right leader of the conservative New Democracy party, Kyriakos Mitsotakis, was too slow to take a position on the issue, and when he did it was simply "we shouldn't come to a deal now, let's leave it for some other time in the future." This allowed the powerful far-right faction of the party, headed by deputy leader Adonis Georgiadis, to seize the spotlight and push for a "no deal that includes the name Macedonia" position, which was quickly adopted by protesters. ND also fears that splits in the party over the issue could result in the creation of a new party further to the right, modeled after Italy's populist, anti-immigrant Northern League.
The international implications

However much the Greek political landscape might be impacted by the ongoing negotiations, it's developments in its northern neighbor that should perhaps be of more concern. Thanos Dokos, director of ELIAMEP, Greece's most prominent foreign-policy think tank, told FP, "There is concern about stability in the western Balkans, which as far as the U.S. are concerned, centers around Russia's efforts to gain footholds in the region." Costas Douzinas, a Syriza MP who heads parliament's committee on foreign affairs and is a law professor at Birkbeck College, University of London, agrees: "The Western Balkans find themselves in a trajectory of the formation of new nationalisms and unstable governments. The first priority is to avoid the breakdown of this geopolitical region, to not return to the '90s." FYROM has faced an onslaught of challenges to its political stability in the past decade. Gruevski's government, mired in scandals, made a desperate grasp for power by attempting to politicize the judiciary and cracking down hard on protesters who demanded more democracy and transparency. Meanwhile, tensions between the country's Slavic and Albanian populations have shown worrying signs, as evidenced by the violent clashes in the northern region of Koumanovo between an armed ethnic Albanian faction calling themselves the National Liberation Army and the national police in 2015. Gruevski, in his growing desperation following the wave of protests, and sensing he had lost the United States' backing, turned to Russia for diplomatic support, sounding the alarm for both NATO and the EU. The solution promoted by the EU and the United States is for Macedonia, alongside Serbia, to be inducted into NATO and the EU. "The EU realized it had neglected the Balkans, which has resulted in pretty ugly situations in economic, political, and security terms in the region," says Dokos, the think tank director.
"We're seeing them now opening up again the possibility of expansion, mostly with Serbia and Montenegro, but other countries, too, to signal that if they're willing to try, the door is open in the immediate future." The prospect has significant implications for FYROM, but only if it can sort out its relationship with Greece. Athena Skoulariki, a lecturer at the University of Crete who specializes in the Macedonian issue, stressed that the tension with Athens "has created a lot of problems with the country's accession program." But a window of opportunity has opened for Greece now that Gruevski, the main culprit behind the promotion of the nationalist rhetoric around FYROM's supposed origins, has lost power amid a welter of corruption probes against his government. Along with a new prime minister, a new atmosphere — one more conducive to compromise on the naming issue — has emerged in FYROM.

Domestic political football

Whether Greek politicians will be able to take advantage of it, however, is another issue entirely. In exchange for Greece's recognition of the name New Macedonia or Northern Macedonia, FYROM has agreed in principle to cease publicizing any official links to Alexander the Great and ancient Macedonia. The country has already renamed an airport and a highway as a gesture of good will. But this hasn't placated feelings among Greeks. And it's difficult to spare any of Greece's political factions from blame. "The handling of the situation by the Greek government on a strategic level was positive, because they realized there was a window of opportunity to close this issue once and for all," Thanos Dokos said. He blames the government, however, for avoiding any compromise with opposition parties, by springing the issue on them without consulting them first. "On a tactical level, reaching some understanding with other political parties and preparing public opinion, there I'm afraid they didn't handle it well.
They saw it as an opportunity to impale the opposition." From the government's perspective, however, it is the opposition New Democracy that is being opportunistic. Douzinas, writing for the left-wing online daily EfSyn, put it thus: "New Democracy is not simply registering its different perspective on the issue but is actively trying to undermine trust [in the government] and create an atmosphere of terror, which would make citizens entrench themselves in their barricaded home and their 'surrounded' and 'under threat' country. Their aim is not the salvation of the 'nation's soul,' but the creation of insecurity that will turn citizens against the government." "Greece, as a country, has double the GDP of all the western Balkan countries put together," he tells Foreign Policy. "It's in Greece's best interest to stabilize the region. Because Greek businesses are involved in the process, normalization will be an economic boost for Greece, too. We can be in a mutually beneficial, growing relationship with our northern neighbors, and their path toward the EU is evidence of that." Dokos agrees that a failure to come to a deal would be a very bad outcome for both countries: "[If there's no agreement] the current name might become permanent, a country that could be a partner might be estranged and find itself open for others to play games in the Balkans, which Greece doesn't want, as the Balkans are its natural hinterland." The stakes are high. "I'm less optimistic than I was two weeks ago," Dokos says. "I'm not saying I'd rule it out, but I don't think the odds are with coming to a solution at this point." It's indeed hard to see the room for compromise right now. But Greece, the EU, and NATO have too much riding on the outcome to let the present opportunity go to waste, however strong the emotions of everyday Greeks.
Flower Bulbs Are Not Tubers

First of all, we would like to clear up a widespread misunderstanding: bulbs are not tubers! A bulb is in fact a complete plant, lying all curled up, waiting to unfold. If you cut a bulb in half, an onion for instance, all the tunics and eyes are visible inside the bulb. These are the future stalks, leaves and flowers. A tuber is not a complete plant but an accumulation of nutrients with buds on the outside, like the potato. Although we call them 'spring bulbs', they should be planted in autumn. Only after the cold winter spell do they develop into amazing flowering plants. Bulbs can be planted from the beginning of September through to December. As long as the ground is easy to work, the bulbs can still be planted, even after the first frost. Do keep in mind that early-planted bulbs will flower earlier too. You could make use of that by planting bulbs of one type at intervals of a few weeks. This way you can prolong the flowering period in a natural way. Some bulbs, early-flowering ones in particular, prefer a slightly warm soil at the time of planting so their root systems can develop faster. To plant flower bulbs you first need to choose and buy or order them. This is something you should do at an early stage, as gardeners who buy first have the widest choice. The planting time for the earliest flowering bulbs begins in September. Crocus, Galanthus (snowdrop) and Eranthis (winter aconite) are the first to buy. They bloom early, so it is preferable to plant them early in order to give them sufficient chance to develop a good root system. That will happen fairly quickly in a border that is still warm from the summer sun; a soil temperature between 5º and 10º C is ideal. All bulb crops that flower later can be planted in October and November. Try to arrange for the bulbs to be delivered just before you intend to plant them. They are sent directly from the nursery and are in optimum condition.
If circumstances force you to put off planting them until later, remember to unpack the bulbs as soon as they arrive. Place them in a dry, dark place at a temperature under 20º C and open the bags so that sufficient air can circulate. If you have ordered bulbs that dry out quickly, keep them in a tray containing sand or peat dust. Types that are prone to drying out include: Allium ursinum (ramson), Anemone, Eranthis (winter aconite), Erythronium (dog-toothed violet), Galanthus (snowdrop) and Leucojum vernum (spring snowflake). Most other bulbs are not so sensitive, partly because of years of experimental adaptations, which reduce the chances of, for example, fungal diseases developing.
Why does my knee hurt after running?

It often happens that the knee hurts after a run, with discomfort of a cutting or pricking, superficial character. The focus is located on the inside of the knee joint, with no signs of redness or swelling of the skin. A frequent reason why the knee hurts after running is trauma (chondromalacia of the patella). It arises because of unnatural and incorrect movements of the joint. The characteristic signs of this pathology are dull pain or stiffness near the patella. Incorrect movements are often the reason why the knee hurts after running: they cause friction, irritation and inflammation of the surfaces of the joints and ligaments. Incorrect movements are observed in the following cases:

1. When running on uneven terrain. Tree roots, sloping sections of the road, as well as pits and hillocks cause the ankle and foot to work incorrectly. These movements are transmitted to the knee.
2. With the wrong running technique. Torsion of the trunk during movement and outward-turned feet cause a sideways force on the knee.
3. With flat feet. This deficiency affects the work of the foot and is transmitted to the knee.
4. With low-quality or heavily worn running shoes.
5. With overstrained and inelastic muscles, which cause unnatural movements in the knee.
6. When no proper warm-up was performed before running. Proper stretching helps to improve the elasticity of the muscles, preparing them for intensive work.

The knee is very sore under excessive loads. In this regard, one should avoid running along asphalt roads in unsuitable footwear. Another reason the knee hurts after running can be the joint's unpreparedness for heavy loads. In the event that unpleasant sensations appear, training should be ended immediately. Training can resume after one or two days, if the pain no longer disturbs.
It is necessary to take precautions to prevent the occurrence of this unpleasant phenomenon. If the pain does not go away, training should be abandoned for at least a week. For rapid elimination of the pathology, the use of ice and anti-inflammatory ointments is recommended. The most commonly used drugs are "Indovazin", "Ibuprofen", "Voltaren", etc. During the break between training sessions, in the absence of pain in the knee, you should switch to other types of exercise. It is recommended to swim, ride a bicycle, ski or skate. In the event that the knee hurts after running, the pathology can be cured in a short time. It is not a chronic trauma requiring long-term therapy and causing relapses even with small overloads in the future. To prevent a state where the knee hurts after a workout, it is necessary to run on an even surface with the proper technique. If you have flat feet, you need to purchase special orthopedic insoles, and remember to stretch the muscles before running. Particular attention should be given to the choice of shoes, and you should adhere to an optimal intensity and volume of training.
In this article, we will discuss different perspectives on patterns of legitimizing arts education. By different patterns we mean different justifications and logics in how arts education is argued for in the scientific as well as in the political sector. We will propose a brief definition and systematization and present our discussion with international colleagues at the “Cultural Policy and Arts Education” panel at the International Conference on Cultural Policy in 2014. At the International Conference on Cultural Policy (ICCPR) – hosted by the department of Cultural Policy at the University of Hildesheim from 9–13 September 2014 – we discussed patterns of legitimizing arts education with experts from different countries. One finding of the conference was that the need to legitimize arts education is a crucial topic in many countries because arts education is mostly considered a minor topic compared to other subjects such as economics, natural sciences, etc. that are taught at schools, at universities, or make it onto the political agenda. Yet there is also a strong belief in the potential of arts education, which is reflected in statements made by researchers and politicians and expressed, for example, in the Seoul Agenda. The Seoul Agenda is a result of the Second World Conference on Arts Education in Seoul 2010 and has raised high expectations of arts education fostering development in society. 
Its goal number three states accordingly: “Goal 3: Apply arts education principles and practices to contribute to resolving the social and cultural challenges facing today’s world
3.a Apply arts education to enhance the creative and innovative capacity of society
3.b Recognize and develop the social and cultural well-being dimensions of arts education
3.c Support and enhance the role of arts education in the promotion of social responsibility, social cohesion, cultural diversity and intercultural dialogue
3.d Foster the capacity to respond to major global challenges, from peace to sustainability through arts education.” (UNESCO 2010:8)
In the German context, this goal and its sub-items have been received with skepticism. Because of Germany’s experience with the Nazi regime, the German constitution guarantees the freedom of art in article 5 (3). The Nazis instrumentalized art for their own purposes and for the legitimation of their political regime. Against this backdrop, using the arts for purposes other than enriching people’s lives is viewed very critically in Germany and constitutes a sensitive point that causes many discussions. Before summarizing what we have learned from the international debate, we will outline our understanding of Germany’s system of arts education.
A definition of “arts education”
There are many different definitions of arts education in Germany. Below we present our definition for the specific field of research that we focus on. In so doing, we use the German terms because there is no exact translation and their significance has to be explained. For example, there are two terms for education in the German language: Erziehung, which can be translated as education, and Bildung, which means not only education but also cultivation, formation, and even culture in an objective sense.
From our point of view, there are three major approaches to arts education (Reinwand 2012:108ff.): künstlerische Bildung (“artistic education”), ästhetische Bildung (aesthetical education), and the more comprehensive Kulturelle Bildung (cultural education). (Vanessa Reinwand: Künstlerische Bildung - Ästhetische Bildung - Kulturelle Bildung) Künstlerische Bildung (“artistic education”) refers to education (often in special schools or higher education institutions) in the arts. Participating in künstlerische Bildung is playing the piano, dancing, or painting. One learns certain techniques and the historical background of the works of art. Ästhetische Bildung (aesthetical education) is a broader subject that includes künstlerische Bildung but also involves sharpening the senses and strengthening the ability to express oneself. This does not have to happen only in contact with works of art but can also occur in dealing with everyday objects, observing nature, or listening to sounds like the noise of the street. The core of ästhetische Bildung is to detect the intensity of experiencing and the possibilities of using and educating the senses. Finally the term Kulturelle Bildung includes both künstlerische and ästhetische Bildung and is composed of the rather weighty German terms Kultur and Bildung. On the one hand, Kulturelle Bildung describes a biographical, self-educational process in dealing with music, dance, painting, or any other aesthetic practice in an active (i.e., practicing arts) or a reflective way (i.e., perceiving arts); on the other hand, it is an expression regularly used in reference to the social domain of out-of-school institutions or informal settings in Germany where one can learn and be taught about arts or take part in different arts education activities. This sphere has developed to a considerable extent since the 1970s and has today evolved into a complex field involving many stakeholders. 
In contrast to school learning, Kulturelle Bildung also has certain pedagogical implications. For example, there are principles such as the voluntary nature of these activities, the idea of autodidactic learning and studying without a fixed curriculum, or involvement and an interest in strengthening abilities instead of focusing on faults. This different understanding of educational processes in the domain of Kulturelle Bildung as opposed to the normal school scenarios often results in misunderstandings between teachers and actors in Kulturelle Bildung contexts. Thus, these different understandings can make it difficult to transfer the concept of Kulturelle Bildung to schools. Schools generally offer only music or visual arts lessons and sometimes theatre. Teachers have to work to a set curriculum, and often there is no time for the pupils’ personal art activities. There is strong systemic pressure on teachers to act in congruence with the organizational demands of the institution. Except for extraordinary projects in which an entire school is involved and perhaps outside artists are hired, art and music lessons are becoming less and less important in schools – in contrast to PISA subjects like math or languages (Programme for International Student Assessment, PISA, of the Organisation for Economic Co-operation and Development, OECD). The significance of arts education compared to other subjects in school continues to decrease, whereas the efforts in informal Kulturelle Bildung seem to be rising – at least if one believes the speeches of politicians. This shows that in Germany one has to make a sharp distinction between arts education in daily school life and arts education outside of school or in short-term school projects.
However, over the past years, there have been increasing efforts – for example, by private foundations – to bring more Kulturelle Bildung to schools or, going a step further, to change school systems by incorporating Kulturelle Bildung into daily school life. On this point, we started a lively debate with our international colleagues about whether this is possible: Is it really possible to change schools by involving out-of-school actors such as artists or cultural agents? Would it not be more successful to start by changing teacher education or by making the cultural sector more interesting to pupils? We also have to ask ourselves critically: why are we interested in bringing more arts education to pupils, adults, or seniors to begin with? These questions led us right to the heart of our research topic: Why do we – and our international partners – think arts education is important and has to be implemented in schools? There are different patterns of legitimizing arts education. One suggestion is to differentiate five approaches, as proposed by Eckart Liebau, the head of the UNESCO Chair for Arts Education in Germany (Liebau 2013:68ff.): - The economic approach focuses on the direct economic effects of arts education such as the growth of creative industries or professional services. Here, arts education is not demanded for its own sake; rather, it is its side effects that are important. - The second approach is based on the heritage or diversity argument that is often emphasized by UNESCO. Arts education is important to save and preserve cultural heritage and the diversity of cultural expressions. - The third approach is a social-political one: “Here, arts education is seen as a socially therapeutic means of high potential” (Liebau 2013:69). Arts education serves to empower underprivileged people and to give structure to their lives. - Approaches four and five focus on arts education’s potential to advance either personal or social development.
Both aspects are contained in our definition of Kulturelle Bildung and, according to Liebau, are found especially in Europe. The “subjective approach” emphasizes the development of personality through the personal biographical experience of art. The “social approach” sees the purpose of arts education in developing new forms of artistic expression and in promoting innovation in the arts. In other words, the idea is to produce novelty to keep the art institutions running and support audience development. In the discussion with our international partners, other approaches were added. One approach focuses on side effects. Here, the arts are used to support learning in other subjects and develop academic skills. The popular journalistic slogan “Music makes you smart!” is a good example of this approach. In the same vein, the new German federal program for underprivileged children “Culture empowers people!” (Bündnisse für Bildung. Kultur macht stark) alludes to expectations that arts education can produce tremendous side effects. - In other countries, arts education has gained an important role in building and developing national identity. The idea that arts education could help preserve national identity is not very common in Germany or in the Nordic countries. In Germany, there is little debate about national identity outside of discourses on cultural foreign policy or cultural heritage, owing to its historical experience with the brutal Nazi regime and the Second World War, in the context of which national identity played a major role in the political system. - There is another important approach, referred to as “art as a need.” In this case, arts education is important because the arts represent a special way for humans to express their ideas and communicate. People in different circumstances have an intrinsic need to make art – regardless of whether they live in times of peace or in troubled regions such as Egypt, Syria, and other Arab countries.
In this respect, art has the same right to exist as language. - Another contribution to the discussion was that we should not get entangled in self-justification and should stop problematizing our subject. Why is there no need to legitimize other subjects such as math or chemistry? We should be careful not to put ourselves in a vulnerable position and should rather focus on discussing aspects of quality than of legitimization. - After summarizing these approaches, which to a greater or lesser extent instrumentalize arts education for other purposes, Sigrid Røyseng, a Nordic researcher at the Department of Communication and Culture at BI Norwegian Business School, proposed another concept. As opposed to the technical rationality of instrumentalism, she refers to the concept of ritual rationality in an anthropological sense. In the theory of ritual – rediscovered by the anthropologist Victor Turner – there are different phases, for instance, the liminal phase. “Liminality is seen as a quality of ambiguity or disorientation that occurs when the participants in the ritual no longer hold their original status, but have not yet been reintegrated with a new social status. In the liminal phase the participants of the ritual will often meet some kind of (supernatural) powers. The ritual establishes a new way of structuring their identity, time or community” (Røyseng and Varkøy 2014:110f.). This perspective is more in line with justification than with instrumentalization and sees arts and culture as transformative forces. So what does all this mean in our context? Perhaps some of the beliefs reflected in the different patterns of legitimization can be useful for the practice of arts education. Only the belief that things can change our lives can help us take action. In the following sections, we will sketch the most current patterns of legitimizing arts education in Germany and discuss to what extent they are useful for research.
Which patterns of legitimizing arts education are currently being discussed in Germany? Arts education in Germany is regarded as a human practice that helps us come into contact with ourselves, others, and our whole social environment, and it is therefore important that everyone has access to it. It is a basic human right that everyone should have the possibility and capability to participate in the cultural life of society. Arts education is a way of bringing this right to life. Another variety of the argument that art has a right to exist for its own sake is discussed (along the lines of Bourdieu) primarily as a means of distinction from the rest of society. Art is used as a mechanism by members of the bourgeoisie to distinguish themselves as connoisseurs from the so-called uneducated and the ignorant. According to this logic, patterns of legitimization other than the one revolving around autonomy are dangerous because they exploit the arts. In this perspective, the autonomy or freedom of the arts is the only valid argument. There also exists a long tradition of using the arts for social improvement and as a method in social work. Another prominent pattern of looking at art is as a medium for self-education and personal development. Over the last 15 years or so, arts education has experienced a boom, especially in politics and in some parts of the economic sector. This boom has mainly been caused by legitimizing arts education with reference to two other factors: - Firstly, a hope for a broad gain in creativity that would contribute to a rise of the creative industries (the economic approach). - Secondly, the findings of PISA constantly show that pupils’ academic success statistically correlates with their parents’ socio-economic status. The chances of underprivileged pupils breaking through this barrier are very low. 
There is thus the hope of solving social and educational inequalities by introducing arts subjects and by organizing a bit more theatre or a few music projects. This can be called the partake approach. Which patterns are useful in a research environment? Which ones are only political? From an academic point of view, the economic and the partake approaches are not very useful for the discussion and advancement of arts education. There is no proof that dealing with arts makes a person more creative. There is also no knowledge as to what kind of arts education could make a society more creative and therefore deliver economic benefits. However, the arts do not exist by themselves; there is always a social and political context to consider in order to understand why and how people engage and are involved in arts education. One of our international partners, Clive Gray (Professor of Cultural Policy Studies at the University of Warwick), came to the radical conclusion: “Evidence is rubbish!” There are always certain beliefs in arts education, and since this is so, we as researchers should take a different approach. We should not play this political “game” but observe policy making instead: we should describe the approaches applied in legitimizing arts education; we should criticize beliefs not substantiated by proof; and we should finally deconstruct legitimizing patterns. But we have to be aware that there are different logics at work at the scientific as well as at the political level, and that lobbyism uses research and prepares “packages” for politicians. We also play that game, but as researchers we at least have to lay our cards on the table. From a research point of view, arts education is not really very well suited to solving the complex problems of society that exist because there is a selective school system and a competitive and profit-oriented economic system.
Problems of disparity should be solved by political transformations and not by offering a drop in the bucket and overburdening arts education with hopes it cannot and should not be expected to fulfill. Therefore a suitable education system that produces less inequality should certainly not build on arts education alone. Yet arts education, seen from the perspective of its potential as a transformative force, can play a crucial part in providing a broad, general academic basis that includes a range of different abilities and multiple ways of learning and forms of expression. All in all, it is important that the public comes to know arts education not as entertainment for privileged people or something that has to be legitimized by it being useful for other purposes but as a crucial part of basic education for all. Arts education cannot perform miracles in making people more creative or changing the school system into a better one. But dealing with the arts can help people manage their lives in better ways, and the potential that it offers should not be refused to anyone. What then are the challenges for research in arts education in view of these legitimizing patterns? Even if arts education is considered a human right and supporting it should therefore not require evidence of further positive side effects, we think more research in arts education is needed. In Germany there is a lack of an academic discipline called arts education or Kulturelle Bildung. Researchers who deal with this subject come from many different disciplines such as pedagogy, psychology, neuro-sciences, sports science, or philosophy and typically do not feel associated with the discipline of Kulturelle Bildung specifically. Therefore there is a lack of an academic arts education community and of systematic efforts to promote junior scholarship in the field. 
To begin filling this gap, we established the Federal Network for Research in Arts Education in 2010, which at the moment is the only German network of this kind. Every year we help to organize a federal interdisciplinary congress on research in arts education, which is hosted by a different university each year. With the support of the German Federal Ministry of Education and Research, we initiated a young academics network of PhD candidates, who are expecting to be awarded their doctorate in different disciplines at different universities but all feel at home in Kulturelle Bildung and meet regularly to discuss their work in progress. The members of this federal Network for Research in Arts Education are convinced that in Germany - we need more interdisciplinary research studies that pursue a common research question from the viewpoints of different disciplines; - we need more young academics who are interested in arts education processes; - we need theoretical research that focuses on basic questions in the different areas of art and develops good theories on, for example, what is it like to play music, dance, or paint and what aesthetic processes occur if you immerse yourself in different forms of art; - we need more research on outcomes, but not primarily with a focus on side effects but on individual and biographical situations in which the arts play a crucial role; - we need more longitudinal research; - we need to observe the different pedagogical situations in which arts appear; - we need to investigate the social and political role of arts education and take a critical approach in the process; - and we finally have to worry about transferring the insights of our research to the everyday reality of arts education to make its practice a better one. Of course we must not forget the social and political contexts that influence the impact and beliefs in arts education. They are very important indeed! 
Yet we should not build our debate on political beliefs alone; instead, we should concentrate – at least to some degree – on findings from research.
<urn:uuid:eb5a5056-9abb-45f5-b147-c0ec339af831>
CC-MAIN-2022-21
https://www.kubi-online.de/index.php/artikel/art-for-arts-sake-international-patterns-of-legitimizing-arts-education
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00485.warc.gz
en
0.949063
4,035
2.953125
3
3.02055
3
Strong reasoning
Education & Jobs
Like many European cities in 2014, Bristol marked the 100th anniversary of the outbreak of the First World War - a conflict that dramatically changed the map of Europe. We know that it was the assassination of a single man, Archduke Franz Ferdinand, that started that conflict. But what exactly was he Archduke of, and why did this trigger a conflict that scarred the whole of Europe? Imagine a map of Europe. If we turn back the clock 100 years we would still recognise many of the countries: France, Spain, Holland, Denmark. These all have more or less the same boundaries and shapes as they do today. When it comes to the south-east and centre of Europe, however, things are very different. There is one very large country which no longer exists. We refer to this in English as Austria-Hungary, or the Austro-Hungarian Empire, even though that was not its name for most of its existence, and neither of the modern states which go by the names of Austria and Hungary covers the same territory. So let us look at that map from 100 years ago: here is Austria-Hungary before the First World War. You will recognise the names of some of its regions. Austria-Hungary was the product of hundreds of years of military conquests and diplomacy by its imperial rulers, the Habsburg royal family. It was said that while other dynasties fought their enemies, the Habsburgs ensured that they married their sons and daughters to other ruling families. And this is Franz Joseph I, Emperor of Austria-Hungary from 1848 to 1916, uncle of Archduke Franz Ferdinand. His official title stretched over two pages and named all of the territories he ruled: Franz Joseph the First, by God's Grace Emperor of Austria, King of Hungary and Bohemia, King of Lombardy and Venice, Dalmatia, Croatia and so on.
Where maps of this lost Empire become interesting for the historian and linguist is when they represent the different language speakers and where they were at the time. Here is the language map of Austria-Hungary before the First World War. In the terms of the Habsburg bureaucracy these language groups were referred to as 'Nationen' or 'nations'. The English rubric of the time calls them 'races'. Nowadays we would say ethnic or linguistic minorities. And that in itself is an important point. Everyone in this patchwork of languages was in one sense or another in a minority. It would not be too bold to suggest that this map predicts much of what follows in the 20th century - if we assume that the course of that century is mapped out by conflicts over what is and is not a 'nation', an ethnicity, an identity. As the Yiddish linguist Max Weinreich once put it: 'A language is a dialect with an army and a navy'. Each of the groups on our map will struggle to achieve national status for their language over the following 100 years. Some of these conflicts, like that in Ukraine, part of which was in the Austro-Hungarian Empire, are not over. If we look at the different groups represented on the map we can use some examples to show what I mean. Let us take a group at the centre - the Hungarian speakers, or Magyars as they call themselves. In 1912 they represented around 20% of the population, the second largest group after the German speakers. All of the Magyars were within the boundaries of the Empire, most in the half that was ruled from Budapest, and most of them were settled in a geographically discrete territory - with a smaller population to the East in Transylvania - part of modern-day Romania.
It should therefore come as no surprise that the Hungarian speakers achieved a political settlement - known in English as the 'Austro-Hungarian Compromise' - in the second half of the 19th century, but the political settlement they achieved was an uncomfortable one that left them - or at least the Hungarian aristocratic land-owners who represented them in the Budapest parliament - in a territory in which they were outnumbered by Romanian, Polish, Ruthenian (that is, Ukrainian) and German speakers. Other groups who were not as numerous did not fare as well in their aspirations - the Polish speakers (who made up 10%), the Romanian speakers (6%), or the tiny minorities of Italian speakers (2%) or Slovenes (3%). These groups had little leverage at national level but could dispute the use of their language locally. Some groups - for example, Czech speakers - had more leverage than their numbers (at 13%) might suggest. The Czech speakers were all within the half of the Empire ruled from Vienna, where they made up 23% of the total population, and managed as a result to mount effective political campaigns for 'language rights' within their territories. The final years of the Habsburg Empire can easily be portrayed as a series of conflicts about language and identity in which the German-speaking Habsburgs tried, against the tide of national movements, to hold on to their historical legacy of a multilingual 'multinational Empire'. But that is to suggest that there was a clear-cut Austro-Hungarian identity holding the entire patchwork together. The problem was that there was not. The largest of all the minorities were the German speakers, making up 24% of the total. But they were also the most scattered. Austrian German speakers had described themselves as 'German' - deutsch - for centuries. But since 1871 that word also meant belonging to a specific nation, to Deutschland, an empire, das deutsche Reich.
At the opposite end of the scale were the Serbian speakers, among the smallest of the many minorities in the Austro-Hungarian Empire. The south-eastern province of Bosnia had been joined to the Empire in 1878 but only formally annexed in 1908. It contained most of Austria-Hungary's 3.8% of Serbian speakers. So in a sense it is no surprise that the action of one Bosnian Serbian nationalist in 1914 in assassinating Franz Joseph's nephew Franz Ferdinand, the heir to this Empire that was soon to vanish, started a chain of events that led to the First World War. And we should not be too surprised that Hitler's annexation of the Sudetenland and then of German-speaking Austria, both of which had been provinces of the same Habsburg Empire, formed the prelude to the Second.
'Shepherd's map of the Habsburg Empire' with Dr Ian Foster
This video will tell you about the 'Shepherd's map of the Habsburg Empire'. It's OK to pause the video or to watch it as many times as you like. The video subtitles do not show the accented characters, unfortunately; to see them, you might like to refer to the English transcript available in the 'downloads' section below. Prior to 1945, the spelling “Hapsburg” was commonly used in English-language documents. Since the 1960s, “Habsburg” has been used consistently in English-language texts, hence our preferred spelling in this course.
© University of Bristol, produced by Beeston Media
<urn:uuid:85661cda-b3de-48fe-9a68-4e12a33a3af8>
CC-MAIN-2018-26
https://www.futurelearn.com/courses/cultural-studies/5/steps/178911
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863100.8/warc/CC-MAIN-20180619154023-20180619174023-00003.warc.gz
en
0.971648
1,612
3.4375
3
2.88107
3
Strong reasoning
History
Alexandria, Va., USA - For the first time, researchers are using proteomics to examine proteins and peptides in saliva in order to accurately detect exposure to Zika virus. With 70 countries and territories reporting evidence of mosquito-borne Zika virus transmission, there is an increased need for a rapid and effective test for the virus. This study, published online today in the Journal of Dental Research (JDR), offers a new, quicker, and more cost-effective way to test for the virus. By analyzing the saliva of a pregnant mother infected with Zika and her twins -- one born with microcephaly and one without -- the researchers were able to pinpoint the specific protein signature for Zika that is present in saliva, creating the potential to use this signature as an effective way to screen for exposure. Walter Siqueira, Schulich School of Medicine & Dentistry, Western University, Canada, and a team of international researchers also discovered important clues about how the virus passes from mother to baby and its role in the development of microcephaly, a birth defect in which a baby's head and brain are smaller than expected. The research suggests a vertical transmission of the virus between mother and baby. The mutations in the amino acid sequence of the peptides differed for each twin, suggesting that these mutations may play a role in whether or not a baby will develop microcephaly. "We are very excited to publish findings that shed light on the transmission of Zika virus and present an innovative approach to assessing the presence of Zika virus," said JDR Editor-in-Chief William Giannobile. "This research has the potential to positively impact global health. By detecting the virus, the infected individuals can have their symptoms and the virus progression properly monitored, as well as take action to stop the spread of the virus which causes these devastating craniofacial defects in newborns."
Currently, the Centers for Disease Control and Prevention uses blood tests that look for viral RNA in order to diagnose Zika. The drawback to this method is that it is only able to detect the virus up to five to seven days after exposure. Siqueira points out that because the proteins and peptides that come directly from the virus are more stable than RNA, saliva proteomics can detect the virus far longer after exposure than the traditional method can. In this case, the window of detection was extended to nine months post-infection. The findings also open new doors for the development of antibody-based diagnostic tests for point-of-care detection. The researchers have received a provisional U.S. patent to develop a simple device that can be used to identify the Zika virus peptides in saliva outside of the laboratory. This study is accompanied by a perspective article, "Could the Use of Saliva Improve the Zika Diagnosis Challenge? Contributions from a Proteomics Perspective," by IADR Regional Board Member Jaime Eduardo Castellanos, Universidad Nacional de Colombia, Bogotá. Listen to the companion podcast moderated by JDR Associate Editor Joy Richman, "Salivary Proteomics Paves the Way for New Zika Virus Diagnostics." To read the JDR special issue on orofacial pain, please visit http://journals. About the Journal of Dental Research The IADR/AADR Journal of Dental Research is a multidisciplinary journal dedicated to the dissemination of new knowledge in all sciences relevant to dentistry and the oral cavity and associated structures in health and disease. At 0.02225, the JDR holds the highest Eigenfactor® Score of all dental journals publishing original research. The JDR ranks #1 in Article Influence and #2 in the Two-Year Journal Impact Factor rankings with a rating of 4.755 according to the 2016 Journal Citation Reports® (Thomson Reuters, 2017).
About the International Association for Dental Research The International Association for Dental Research (IADR) is a nonprofit organization with over 10,000 individual members worldwide, dedicated to: (1) advancing research and increasing knowledge for the improvement of oral health worldwide, (2) supporting and representing the oral health research community, and (3) facilitating the communication and application of research findings. To learn more, visit http://www.
<urn:uuid:dac8bd80-5418-4e5f-8040-2e7ae9596595>
CC-MAIN-2018-51
https://eurekalert.org/pub_releases/2017-08/iaa-pio082117.php
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00319.warc.gz
en
0.921079
870
3.390625
3
2.939785
3
Strong reasoning
Health
Courtesy University of Miami
Second-grader Sophi Bromenshenkel from Minnesota sold lemonade, hot chocolate, shark-shaped cookies, and wristbands to promote shark conservation, and became an international phenomenon. Earlier this year, 8-year-old Sophi was named the 2011 "Ocean Hero" by Oceana, an international advocacy group working to protect the world’s oceans. She graces the front cover of the latest issue of Oceana Magazine. Through her efforts, $3,676.62 was raised to pay for satellite tags that are used to track the movement of individual sharks and provide insight on shark populations. In addition to providing safety information to recreational ocean users, observations of how sharks navigate the ocean can be used to inform policymakers where to focus their marine protection efforts. The satellite-tagged sharks can be followed online from the website for the R.J. Dunlap Marine Conservation Program. Note that the Google Earth Plugin needs to be installed on your computer to view the maps.
Courtesy Los Angeles
Every year, my friends in California give me a Mono Lake calendar. I will be there next week to see for myself how the Lake is doing. Mono Lake is recovering after the California Water Wars. Water diversion, speculation, and fighting have been going on in California for more than 100 years. About 110 years ago, some of the visionary leaders in Los Angeles decided to dig canals all the way across the state to the Sierra Nevada Mountain Range and divert and carry its water to their growing city. Because the waters were diverted, the water level in Mono Lake started to fall, and the Mono Lake ecosystem became severely impacted. In 1978, the Mono Lake Committee was formed to protect Mono Lake. The Committee (and the National Audubon Society) sued LADWP in 1979, arguing that the diversions violated the public trust doctrine, which states that navigable bodies of water must be managed for the benefit of all people.
The litigation reached the California Supreme Court by 1983, which ruled in favor of the Committee. Further litigation was initiated in 1984, claiming that LADWP did not comply with state fishery protection laws. "In 1994, the California State Water Resources Control Board (SWRCB) established significant public trust protection and eco-system restoration standards, and LADWP was required to release water into Mono Lake to raise the lake level 20 feet. As of 2003, the water level in Mono Lake has risen 9 of the required 20 feet. Los Angeles made up for the lost water through state-funded conservation and recycling projects." (Wikipedia)

I read recently that a UN Development Program report predicted that scarcity of water, not oil, will possibly be the leading reason for major conflicts in Africa over the next 25 years. [It's Blog Action Day 2010, and this year's theme is water.]

Courtesy RaeA

In a paper titled Increased Food and Ecosystem Security via Perennial Grains, scientists state that perennial grains could be available in two decades and urge that research into perennial grains be accelerated by putting more personnel, land, and technology into breeding programs. Perennial grains have roots that reach 10 feet or deeper, reduce erosion, build soil, need less herbicide, and, best of all, survive over winter so there is no need to plow, cultivate, or replant.

I was just sent this link with some amazing photos of the BP oil spill. They certainly provide a vibrant visual sense of the disaster.

It's not every day that I agree with the NYTimes' John Tierney. But today, I do. He offers up seven rules for a new breed of environmentalist: the "Turq." "No, that's not a misspelling. The word is derived from Turquoise, which is Stewart Brand's term for a new breed of environmentalist combining traditional green with a shade of blue, as in blue-sky open-minded thinking.
A Turq, he hopes, will be an environmentalist guided by science, not nostalgia or technophobia." Check out the rules. Are you a Turq? Does any of Tierney's advice surprise you?

Courtesy Cornelia Kopp

Jon Foley, of the University of Minnesota's Institute on the Environment, has similar advice. "There are no silver bullets," he says. "But there is silver buckshot." Human activities, rather than nature, are now the driving force of change on the planet. And experts say that there will be nine billion of us on the planet by 2050. Making sure that we all have the chance to survive and thrive will require a lot of innovation, and a lot of blue-sky thinking. Who's up for the challenge?

Courtesy Cornell University Library

My wife often relates to friends that the Pompeii exhibit at the Science Museum of Minnesota was her favorite. Buried in A.D. 79 by a volcano's eruption, the secrets of Pompeii remained under 20 ft of ash until discovered in 1748. Since then, about two-thirds of the city has been exposed. What many people think about when you ask them about Pompeii is a city frozen in time when it was suddenly buried. Cambridge University's Mary Beard, author of The Fires of Vesuvius: Pompeii Lost and Found, says that, "The ground trembled for weeks beforehand. Only the infirm, the stupid and the optimists stayed." Rather than a city frozen in time, as scholars have described Pompeii, it was an emptied disaster scene, goods removed and doors locked, when Vesuvius covered the town with ash. What impressed me about the Pompeii exhibit was the architecture, the interior designs, and the art objects. Pompeii was where the richest, most powerful Roman elite set up summer homes, which became like stage creations, re-creating Greek art and Macedonian palaces to show off their status among their peers. What might be found under the remaining, yet uncovered, ruins?
According to architectural historian Thomas Howe of Southwestern University in Georgetown, Texas: Still buried under Vesuvius' cooled lava are parts of both Pompeii and Herculaneum; Oplontis, a villa that might have belonged to the emperor Nero's wife; and Stabiae, a site that Howe says is "the largest concentration of excellently preserved enormous Roman villas in the entire Mediterranean world." I think it fortunate that maybe some of the best might be uncovered last. Once exposed, the "ruins quickly become ruined". Weather, weeds, tourists, and looters take a drastic toll on the beautiful artifacts. The Italian government last year declared a state of emergency to speed preservation efforts at the 109-acre ruin. Rather than starting new excavations at Pompeii and nearby sites, Pompeii superintendent Pietro Giovanni Guzzo has concentrated on conservation. Thanks to Google Translate, you can keep up with what is going on. The website Blogging Pompeii is: ... for all those who work on Pompeii and the other archaeological sites of the Bay of Naples. Here we share news and information about Pompeii and the other sites, and we discuss current research.

Tired of being told over and over again to recycle or to buy compact fluorescent bulbs? Conserving energy and reducing waste is important, but it's not always the most exciting way to help the planet. Or is it...? This Earth Day you can combat your boredom and reduce your carbon footprint with one of these cool Do-It-Yourself projects from the website Instructables. Some are harder than others, but all of them are possible with a little time and elbow grease. Got other Earth Day project ideas? Share them here! Or better yet, upload your own instructions to the Instructables website and help other people have a fun and functional Earth Day every day!
Courtesy mickipicki

For wildlife biologists, most concerns about animal populations revolve around unnatural declines. Due to things like human development, habitat loss, climate change, pollutants and diseases that make animals sick, many wildlife populations are disappearing at an alarming rate. Not surprisingly, most of the perceived problems resulting from animal population growth are coming from urban and suburban areas. Scientists are looking for ways to control the booming populations of deer, geese, pigeons and other species that have adapted to the changes humans have made to the environment. Since hunting or trapping is offensive to so many people, biologists are looking for new solutions and think that they may have found one in wildlife birth control. At the National Wildlife Research Center in Fort Collins, Colorado, biologists have developed a one-a-day contraceptive pill for geese and pigeons, and are working on a one-time injectable contraceptive for white-tailed deer. These wildlife birth control methods work on the same principle as human birth control, disrupting the animal's reproductive cycle or preventing fertilization from occurring. The whole issue of wildlife population control brings up an interesting paradox. People love animals and nature, or at least, they love the idea of animals and nature as portrayed by the folks at Disney. People also love their yards and gardens, their pets and cars and airplanes, all of which provide ample opportunity for conflict with our furry and feathered friends. It's worth remembering that many of the animals we consider pests today were once hunted to near extinction, and that it was the efforts of conservation biologists, along with hunters and fisherpeople, that helped to bring back many of these iconic species. So, is birth control for Bambi really the answer?
I'm not sure, though I do have lots of questions, including whether this kind of animal birth control will contribute to the already harmful effects that hormones found in human birth control are having on the environment. Source: Popular Science

Courtesy Monterey Bay Aquarium

I ran into some folks from the Monterey Bay Aquarium today and learned about their Seafood Watch program - it's impressive, and helpful. Their website contains the most current information on sustainable seafood choices available in different regions of the U.S. The guides can be viewed online or downloaded in a pocket-size version. The site contains a lot of other useful information on how you can be an advocate for ocean conservation, as well as background on the conservation issues that face our oceans.
Devolution in the United Kingdom

In the United Kingdom, devolution (Scottish Gaelic: fèin-riaghlaidh; Welsh: datganoli; Irish: Dílárú) refers to the statutory granting of powers from the Parliament of the United Kingdom to the Scottish Parliament, the National Assembly for Wales, the Northern Ireland Assembly and the London Assembly, and to their associated executive bodies: the Scottish Government, the Welsh Government, the Northern Ireland Executive and, in England, the Greater London Authority and combined authorities. Devolution differs from federalism in that the devolved powers of the subnational authority ultimately reside in central government; the state thus remains, de jure, a unitary state. Legislation creating devolved parliaments or assemblies can be repealed or amended by central government in the same way as any statute.

Irish home rule

Earlier in the 19th century, Irish politicians like Daniel O'Connell had demanded a repeal of the Act of Union 1800 and a return to two separate kingdoms and parliaments, united only in the personal union of the monarch of Great Britain and Ireland. In contrast to this, demands for home rule called for autonomy for Ireland within the United Kingdom, with a subsidiary Irish parliament subject to the authority of the parliament at Westminster. The issue was first introduced by the Irish Parliamentary Party led by Isaac Butt, William Shaw and Charles Stewart Parnell. Over the course of four decades, four Irish Home Rule Bills were introduced into the British Parliament:
- the First Home Rule Bill was introduced in 1886 by Prime Minister William Ewart Gladstone. Following intense opposition in Ulster and the departure of Unionists from Gladstone's Liberal Party, the bill was defeated in the House of Commons.
- the Second Home Rule Bill was introduced in 1893 by Prime Minister Gladstone and passed the Commons but was rejected in the House of Lords.
- the Third Home Rule Bill was introduced in 1912 by Prime Minister H. H.
Asquith, based on an agreement with the Irish Parliamentary Party. After a prolonged parliamentary struggle, it was passed under the provisions of the Parliament Act 1911, under which the Commons overruled the veto by the Lords. Again, this bill was fiercely opposed by Ulster Unionists, who raised the Ulster Volunteers and signed the Ulster Covenant to oppose the bill, thereby raising the spectre of civil war. The act received royal assent (with restrictions in regard to Ulster) shortly after the outbreak of World War I, but implementation was suspended until after the war's conclusion. Attempts at implementation failed in 1916 and 1917, and the subsequent Irish War of Independence (1919–22) resulted in it never coming into force.
- the Fourth Home Rule Bill was introduced in 1920 by Prime Minister David Lloyd George and passed both houses of parliament. It divided Ireland into Northern Ireland (six counties) and Southern Ireland (twenty-six counties), which each had their own parliament and judiciary but which also shared some common institutions. The Act was implemented in Northern Ireland, where it served as the basis of government until its suspension in 1972 following the outbreak of the Troubles. The southern parliament convened only once, and in 1922, under the Anglo-Irish Treaty, Southern Ireland became the Irish Free State, a dominion within the British Empire, which was declared fully sovereign in 1937 (see Republic of Ireland).

Home Rule came into effect for Northern Ireland in 1921 under the Fourth Home Rule Act. The Parliament of Northern Ireland established under that act was prorogued (the session ended) on 30 March 1972 owing to the destabilisation of Northern Ireland upon the onset of the Troubles in the late 1960s. This followed escalating violence by state and paramilitary organisations following the suppression of civil rights demands by Northern Ireland Catholics.
The Northern Ireland Parliament was abolished by the Northern Ireland Constitution Act 1973, which received royal assent on 19 July 1973. A Northern Ireland Assembly was elected on 28 June 1973 and, following the Sunningdale Agreement, a power-sharing Northern Ireland Executive was formed on 1 January 1974. This collapsed on 28 May 1974, due to the Ulster Workers' Council strike. The Troubles continued. The Northern Ireland Constitutional Convention (1975–1976) and the second Northern Ireland Assembly (1982–1986) were unsuccessful at restoring devolution. In the absence of devolution and power-sharing, the UK Government and Irish Government formally agreed to co-operate on security, justice and political progress in the Anglo-Irish Agreement, signed on 15 November 1985. More progress was made after the ceasefires by the Provisional IRA in 1994 and 1997. The 1998 Belfast Agreement (also known as the Good Friday Agreement) resulted in the creation of a new Northern Ireland Assembly, intended to bring together the two communities (nationalist and unionist) to govern Northern Ireland. Additionally, renewed devolution in Northern Ireland was conditional on co-operation between the newly established Northern Ireland Executive and the Government of Ireland through a new all-Ireland body, the North/South Ministerial Council. A British-Irish Council covering the whole British Isles and a British-Irish Intergovernmental Conference (between the British and Irish Governments) were also established. From 15 October 2002, the Northern Ireland Assembly was suspended due to a breakdown in the Northern Ireland peace process but, on 13 October 2006, the British and Irish governments announced the St Andrews Agreement, a 'road map' to restore devolution to Northern Ireland. On 26 March 2007, Democratic Unionist Party (DUP) leader Ian Paisley met Sinn Féin leader Gerry Adams for the first time and together announced that a devolved government would be returning to Northern Ireland.
The Executive was restored on 8 May 2007. Several policing and justice powers were transferred to the Assembly on 12 April 2010. The 2007–2011 Assembly (the third since the 1998 Agreement) was dissolved on 24 March 2011 in preparation for an election to be held on Thursday 5 May 2011, this being the first Assembly since the Good Friday Agreement to complete a full term. The fifth Assembly convened in May 2016. That assembly dissolved on 26 January 2017, and a fresh election for a reduced Assembly was held on 2 March 2017. Ever since the Parliament of Scotland adjourned in 1707 as a result of the Acts of Union, individuals and organisations had advocated the return of a Scottish Parliament. The drive for home rule first took concrete shape in the 19th century, as demands for it in Ireland were met with similar (although not as widespread) demands in Scotland. The National Association for the Vindication of Scottish Rights was established in 1853, a body close to the Tories and motivated by a desire to secure more focus on Scottish problems in response to what they felt was undue attention being focused on Ireland by the then Liberal government. In 1871, William Ewart Gladstone stated at a meeting held in Aberdeen that if Ireland was to be granted home rule, then the same should apply to Scotland. A Scottish home rule bill was presented to the Westminster Parliament in 1913 but the legislative process was interrupted by the First World War. Demands for political change in the way Scotland was run grew dramatically in the 1920s, when Scottish nationalists started to form various organisations. The Scots National League was formed in 1920 in favour of Scottish independence, and this movement was superseded in 1928 by the formation of the National Party of Scotland, which became the Scottish National Party (SNP) in 1934.
At first the SNP sought only the establishment of a devolved Scottish assembly, but in 1942 they changed this to support all-out independence. This caused the resignation of John MacCormick from the SNP, and he formed the Scottish Covenant Association. This body proved to be the biggest mover in favour of the formation of a Scottish assembly, collecting over two million signatures in the late 1940s and early 1950s and attracting support from across the political spectrum. However, without formal links to any of the political parties it withered, and devolution and the establishment of an assembly were put on the political back burner. Harold Wilson's Labour government set up a Royal Commission on the Constitution in 1969, which reported in 1973 to Ted Heath's Conservative government. The Commission recommended the formation of a devolved Scottish assembly, but this was not implemented. Support for the SNP reached 30% in the October 1974 general election, with 11 SNP MPs being elected. In 1978 the Labour government passed the Scotland Act, which legislated for the establishment of a Scottish Assembly, provided the Scots voted for one in a referendum. However, the Labour Party was bitterly divided on the subject of devolution. An amendment to the Scotland Act that had been proposed by Labour MP George Cunningham, who shortly afterwards defected to the newly formed Social Democratic Party (SDP), required 40% of the total electorate to vote in favour of an assembly. Despite officially favouring it, considerable numbers of Labour members opposed the establishment of an assembly. This division contributed to only a narrow 'Yes' majority being obtained, and the failure to reach Cunningham's 40% threshold. The 18 years of Conservative government, under Margaret Thatcher and then John Major, saw strong resistance to any proposal for devolution for Scotland, and for Wales.
In response to Conservative dominance, in 1989 the Scottish Constitutional Convention was formed, encompassing the Labour Party, Liberal Democrats and the Scottish Green Party, local authorities, and sections of "civic Scotland" like the Scottish Trades Union Congress, the Small Business Federation and the Church of Scotland and the other major churches in Scotland. Its purpose was to devise a scheme for the formation of a devolution settlement for Scotland. The SNP decided to withdraw as independence was not a constitutional option countenanced by the convention. The convention produced its final report in 1995. In May 1997, the Labour government of Tony Blair was elected with a promise of creating devolved institutions in Scotland. In late 1997, a referendum was held which resulted in a "yes" vote. The newly created Scottish Parliament (a result of the Scotland Act 1998) has powers to make primary legislation in all areas of policy which are not expressly 'reserved' for the UK Government and parliament, such as national defence and international affairs. Devolution for Scotland was justified on the basis that it would make government more representative of the people of Scotland. It was argued that the population of Scotland felt detached from the Westminster government (largely because of the policies of the Conservative governments led by Margaret Thatcher and John Major). A referendum on Scottish independence was held on 18 September 2014, and was defeated 44.7% (Yes) to 55.3% (No). In the 2015 UK General Election the SNP won 56 of the 59 Scottish seats with 50% of all Scottish votes. This saw the SNP replace the Liberal Democrats as the third-largest party in the UK Parliament. In the 2016 Scottish Parliament election the SNP fell 2 seats short of an overall majority with 63 seats but remained in government for a third term. The Conservative Party won 31 seats and became the second-largest party for the first time.
The Labour Party, down to 24 seats from 38, fell to third place. The Scottish Greens took 6 seats and overtook the Liberal Democrats, who remained on 5 seats. Following the 2016 referendum on EU membership, in which Scotland and Northern Ireland voted to Remain and England and Wales voted to Leave (producing a 52% Leave vote nationwide), the Scottish Parliament voted for a second independence referendum to be held once the conditions of the UK's EU exit are known. Conservative Prime Minister Theresa May has so far rejected this request, citing a need to focus on EU exit negotiations, and then moved in Parliament to allow a general election to be held on 8 June 2017. After the Laws in Wales Acts 1535–1542, Wales was treated in legal terms as part of England. However, during the later part of the 19th century and early part of the 20th century, the notion of a distinctive Welsh polity gained credence. In 1881 the Sunday Closing (Wales) Act was passed, the first such legislation exclusively concerned with Wales. The Central Welsh Board was established in 1896 to inspect the grammar schools set up under the Welsh Intermediate Education Act 1889, and a separate Welsh Department of the Board of Education was formed in 1907. The Agricultural Council for Wales was set up in 1912, and the Ministry of Agriculture and Fisheries had its own Welsh Office from 1919. Despite the failure of popular political movements such as Cymru Fydd, a number of institutions, such as the National Eisteddfod (1861), the University of Wales (Prifysgol Cymru) (1893), the National Library of Wales (Llyfrgell Genedlaethol Cymru) (1911) and the Welsh Guards (Gwarchodlu Cymreig) (1915), were created. The campaign for disestablishment of the Anglican Church in Wales, achieved by the passage of the Welsh Church Act 1914, was also significant in the development of Welsh political consciousness.
Plaid Cymru was formed in 1925 with the goal of securing a Welsh-speaking Wales but initially its growth was slow and it gained few votes at parliamentary elections. An appointed Council for Wales and Monmouthshire was established in 1949 to "ensure the government is adequately informed of the impact of government activities on the general life of the people of Wales". The council had 27 members nominated by local authorities in Wales, the University of Wales, National Eisteddfod Council and the Welsh Tourist Board. A cross-party Parliament for Wales campaign in the early 1950s was supported by a number of Labour MPs, mainly from the more Welsh-speaking areas, together with the Liberal Party and Plaid Cymru. A post of Minister of Welsh Affairs was created in 1951 and the post of Secretary of State for Wales and the Welsh Office were established in 1964 leading to the abolition of the Council for Wales and Monmouthshire. Labour's incremental embrace of a distinctive Welsh polity was arguably catalysed in 1966 when Plaid Cymru president Gwynfor Evans won the Carmarthen by-election. In response to the emergence of Plaid Cymru and the Scottish National Party (SNP) Harold Wilson's Labour Government set up the Royal Commission on the Constitution (the Kilbrandon Commission) to investigate the UK’s constitutional arrangements in 1969. The 1974–79 Labour Government proposed a Welsh Assembly in parallel to its proposals for Scotland. These were rejected by voters in the Wales referendum, 1979, with 956,330 votes against, compared with 243,048 for. In May 1997, the Labour government of Tony Blair was elected with a promise of creating a devolved assembly in Wales; the Wales referendum, 1997, resulted in a "yes" vote. The National Assembly for Wales, as a consequence of the Government of Wales Act 1998, possesses the power to determine how the government budget for Wales is spent and administered. 
The 1998 Act was followed by the Government of Wales Act 2006, which created an executive body, the Welsh Assembly Government, separate from the legislature, the National Assembly for Wales. It also conferred on the National Assembly some limited legislative powers. In Wales the 1997 referendum on devolution was only narrowly passed, and most voters rejected devolution in all the counties bordering England, as well as in Cardiff and Pembrokeshire. However, all recent opinion polls indicate an increasing level of support for further devolution, with support for some tax-varying powers now commanding a majority, and diminishing support for abolition of the Assembly. A March 2011 referendum in Wales saw a majority of 21 local authority constituencies to 1 voting in favour of more legislative powers being transferred from the UK parliament in Westminster to the Welsh Assembly. The turnout was 35.4%, with 517,132 votes (63.49%) in favour and 297,380 (36.51%) against increased legislative power. A Commission on Devolution in Wales was set up in October 2011 to consider further devolution of powers from London. The commission issued a report on the devolution of fiscal powers in November 2012 and a report on the devolution of legislative powers in March 2014. The fiscal recommendations formed the basis of the Wales Act 2014, while the majority of the legislative recommendations were put into law by the Wales Act 2017. England is the only country of the United Kingdom not to have a devolved Parliament or Assembly, and English affairs are decided by the Westminster Parliament. Devolution for England was proposed in 1912 by the Member of Parliament for Dundee, Winston Churchill, as part of the debate on Home Rule for Ireland.
In a speech in Dundee on 12 September, Churchill proposed that the government of England should be divided up among regional parliaments, with power devolved to areas such as Lancashire, Yorkshire, the Midlands and London as part of a federal system of government. The division of England into provinces or regions was explored by several post-Second World War royal commissions. The Redcliffe-Maud Report of 1969 proposed devolving power from central government to eight provinces in England. In 1973 the Royal Commission on the Constitution (United Kingdom) proposed the creation of eight English appointed regional assemblies with an advisory role; although the report stopped short of recommending legislative devolution to England, a minority of signatories wrote a memorandum of dissent which put forward proposals for devolving power to elected assemblies for Scotland, Wales and five Regional Assemblies in England. In April 1994 the Government of John Major created a set of ten Government Office Regions for England to coordinate central government departments at a provincial level. English Regional Development Agencies were set up in 1998 under the Government of Tony Blair to foster economic growth around England. These Agencies were supported by a set of eight newly created Regional Assemblies, or Chambers. These bodies were not directly elected but members were appointed by local government and local interest groups. English Regional Assemblies were abolished between 2008 and 2010, but proposals to replace them were put forward. Following devolution of power to Scotland, Wales and Northern Ireland in 1998, the government proposed similar decentralisation of power across England. Following a referendum in 1998, a directly elected administrative body was created for Greater London, the Greater London Authority. 
Proposals to devolve political power to fully elected English Regional Assemblies were put to a public vote in the Northern England devolution referendums, 2004. Originally three referendums were planned, but following a decisive rejection of the plans by voters in North East England, further referendums were abandoned. Although moves towards English regional devolution were called off, the Regions of England continue to be used in certain governmental administrative functions. There have been proposals for the establishment of a single devolved English Parliament to govern the affairs of England as a whole. This has been supported by groups such as English Commonwealth, the English Democrats and the Campaign for an English Parliament, as well as the Scottish National Party and Plaid Cymru, who have both expressed support for greater autonomy for all four nations while ultimately striving for a dissolution of the Union. Without its own devolved Parliament, England continues to be governed and legislated for by the UK Government and UK Parliament, which gives rise to the West Lothian question. The question concerns the fact that, on devolved matters, Scottish MPs continue to help make laws that apply to England alone, although no English MPs can make laws on those same matters for Scotland. Since the 2014 Scottish independence referendum there has been a wider debate about the UK adopting a federal system, with each of the four home nations having its own, equal devolved legislatures and law-making powers. In the first five years of devolution for Scotland and Wales, support in England for the establishment of an English parliament was low, at between 16 and 19 per cent.
While a 2007 opinion poll found that 61 per cent would support such a parliament being established, a report based on the British Social Attitudes Survey published in December 2010 suggests that only 29 per cent of people in England support the establishment of an English parliament, though this figure has risen from 17 per cent in 2007. John Curtice argues that tentative signs of increased support for an English parliament might represent "a form of English nationalism...beginning to emerge among the general public". Krishan Kumar, however, notes that support for measures to ensure that only English MPs can vote on legislation that applies only to England is generally higher than that for the establishment of an English parliament, although support for both varies depending on the timing of the opinion poll and the wording of the question. In September 2011 it was announced that the British government was to set up a commission to examine the West Lothian question. In January 2012 it was announced that this six-member commission would be named the Commission on the consequences of devolution for the House of Commons, would be chaired by former Clerk of the House of Commons, Sir William McKay, and would have one member from each of the devolved countries. The McKay Commission reported in March 2013.

English votes for English laws

On 22 October 2015 the House of Commons voted in favour of implementing a system of "English votes for English laws" by 312 votes to 270 after four hours of intense debate. Amendments to the proposed standing orders put forward by both Labour and the Liberal Democrats were defeated. Scottish National Party MPs criticised the measures, stating that the bill would render Scottish MPs "second class citizens".
Under the new procedures, if the Speaker of the House determines that a proposed bill or statutory instrument exclusively affects England, England and Wales, or England, Wales and Northern Ireland, then legislative consent should be obtained via a Legislative Grand Committee. This process will be performed at the second reading of a bill or instrument and is currently undergoing a trial period, as an attempt at answering the West Lothian question. There is a movement that supports devolution in Cornwall. A law-making Cornish Assembly is party policy for the Liberal Democrats, Mebyon Kernow and the Greens. A Cornish Constitutional Convention was set up in 2001 with the goal of establishing a Cornish Assembly. Several Cornish Liberal Democrat MPs, such as Andrew George, Dan Rogerson and former MP Matthew Taylor, are strong supporters of Cornish devolution. On 12 December 2001, the Cornish Constitutional Convention and Mebyon Kernow submitted a 50,000-strong petition supporting devolution in Cornwall to 10 Downing Street. This was over 10% of the Cornish electorate, the figure that the government had stated was the criterion for calling a referendum on the issue. In December 2007 Cornwall Council leader David Whalley stated that "There is something inevitable about the journey to a Cornish Assembly". A poll carried out by Survation for the University of Exeter in November 2014 found that 60% were in favour of power being devolved from Westminster to Cornwall, with only 19% opposed, and 49% were in favour of the creation of a Cornish Assembly, with 31% opposed. In January 2015 Labour's Shadow Chancellor promised the delivery of a Cornish assembly in the next parliament if Labour are elected. Ed Balls made the statement whilst on a visit to Cornwall College in Camborne, and it signifies a turnaround in policy for the Labour Party, which in government prior to 2010 voted against the Government of Cornwall Bill 2008-09.
The Yorkshire Devolution Movement is an all-party and no-party campaign group, while the Yorkshire Party is a political party. Both campaign for devolution to Yorkshire, which has a population of 5.4 million, similar to Scotland, and whose economy is roughly twice as large as that of Wales. Arguments for devolution to Yorkshire focus on the area as a cultural region or even a nation separate from England, whose inhabitants share common features. In the European Parliament election in 2014, Yorkshire First attained 1.47% of the vote (19,017 total votes).

Northern England (as a whole)

The Northern Party seeks to establish a regional government for the North of England covering the six historic counties of the region. The campaign aims to create a Northern Government with tax-raising powers and responsibility for policy areas including economic development, education, health, policing and emergency services. In 2004, a referendum on devolution for North East England took place, in which devolution was defeated 78% to 22%.

The legislatures of the Crown dependencies are not devolved, as their origins predate the establishment of the United Kingdom and their attachment to the British Crown, and the Crown dependencies are not part of the United Kingdom. However, the United Kingdom has redefined its formal relationship with the Crown dependencies since the late 20th century. Crown dependencies are possessions of the British Crown, as opposed to overseas territories or colonies of the United Kingdom. They comprise the Channel Island bailiwicks of Jersey and Guernsey, and the Isle of Man in the Irish Sea. For several hundred years, each has had its own separate legislature, government and judicial system. However, as possessions of the Crown they are not sovereign nations in their own right, and the British Government is responsible for the overall good governance of the islands and represents the islands in international law.
Acts of the UK Parliament are normally extended to the islands only with their specific consent. Each of the islands is represented on the British-Irish Council. The Lord Chancellor, a post in the UK Government, is responsible for relations between the government and the Channel Islands. All insular legislation must be approved by the Queen in Council, and the Lord Chancellor is responsible for proposing the legislation in the Privy Council. He can refuse to propose insular legislation or can propose it for the Queen's approval. In 2007–2008, each Crown dependency and the UK signed agreements that established frameworks for the development of the international identity of each Crown dependency. Among the points clarified in the agreements were that:
- the UK has no democratic accountability in and for the Crown Dependencies, which are governed by their own democratically elected assemblies;
- the UK will not act internationally on behalf of the Crown Dependencies without prior consultation;
- each Crown Dependency has an international identity that is different from that of the UK;
- the UK supports the principle of each Crown Dependency further developing its international identity;
- the UK recognises that the interests of each Crown Dependency may differ from those of the UK, and the UK will seek to represent any differing interests when acting in an international capacity; and
- the UK and each Crown Dependency will work together to resolve or clarify any differences that may arise between their respective interests.

Jersey has moved further than the other two Crown dependencies in asserting its autonomy from the United Kingdom. The preamble to the States of Jersey Law 2005 declares that 'it is recognized that Jersey has autonomous capacity in domestic affairs' and 'it is further recognized that there is an increasing need for Jersey to participate in matters of international affairs'.
In July 2005, the Policy and Resources Committee of the States of Jersey established the Constitutional Review Group, chaired by Sir Philip Bailhache, with terms of reference 'to conduct a review and evaluation of the potential advantages and disadvantages for Jersey in seeking independence from the United Kingdom or other incremental change in the constitutional relationship, while retaining the Queen as Head of State'. The Group's 'Second Interim Report' was presented to the States by the Council of Ministers in June 2008. In January 2011, one member of Jersey's Council of Ministers was for the first time designated as having responsibility for external relations, and is often described as the island's 'minister'. Proposals for Jersey independence have not, however, gained significant political or popular support. In October 2012 the Council of Ministers issued a "Common policy for external relations" that set out a number of principles for the conduct of external relations in accordance with existing undertakings and agreements. This document noted that Jersey "is a self-governing, democratic country with the power of self-determination" and "that it is not Government policy to seek independence from the United Kingdom, but rather to ensure that Jersey is prepared if it were in the best interests of Islanders to do so". On the basis of the established principles, the Council of Ministers decided to "ensure that Jersey is prepared for external change that may affect the Island's formal relationship with the United Kingdom and/or European Union". There is also public debate in Guernsey about the possibility of independence. In 2009, however, an official group reached the provisional view that becoming a microstate would be undesirable, and independence is not supported by Guernsey's Chief Minister.
In 2010, the governments of Jersey and Guernsey jointly created the post of director of European affairs, based in Brussels, to represent the interests of the islands to European Union policy-makers. Since 2010 the Lieutenant Governors of each Crown dependency have been recommended to the Crown by a panel in the respective Crown dependency; this replaced the previous system of the appointments being made by the Crown on the recommendation of UK ministers.

Competences of devolved administrations

Northern Ireland, Scotland and Wales enjoy different levels of legislative, administrative and budgetary autonomy. The table shows the areas and degree of autonomy and budgetary independence. Exclusive means that the devolved administration has exclusive powers in that policy area. Shared means that some areas of policy in the specific area are not under the control of the devolved administration. For example, while policing and criminal law may be a competence of the Scottish Government, the UK Government remains responsible for anti-terrorism and coordinates action against serious crime through the National Crime Agency (NCA).
| Policy area | Northern Ireland | Scotland | Wales |
| --- | --- | --- | --- |
| **Law and order** | | | |
| Local administration organisation & finance | Exclusive | Exclusive | Exclusive |
| **Social & health policy** | | | |
| Public pensions (devolved administration) | Shared | Shared | |
| Pensions & child support | Parity | | |
| Social services (housing & student support) | Exclusive | Exclusive | Exclusive |
| Food safety and standards | Exclusive | Exclusive | Exclusive |
| **Economy, environment & transport** | | | |
| Agriculture, forestry & fisheries | Exclusive | Exclusive | Exclusive |
| **Culture & education** | | | |
| Primary & secondary education | Exclusive | Exclusive | Exclusive |
| University & professional education | Exclusive | Exclusive | Exclusive |
| Sport & recreation | Exclusive | Exclusive | Exclusive |
| **Resources & spending** | | | |
| Own tax resources | Yes | Yes | No |
| Allocation by UK Government | Barnett Formula | Barnett Formula | Barnett Formula |
| Other resources | Co-payments (health & education) | Co-payments (health & education) | Co-payments (health & education) |
| Resources | 0% own resources | 0% own resources | 0% own resources |
| Devolved spending as % of total public spending | 63% | 60% | 50% |

- List of current heads of government in the United Kingdom and dependencies
- Federalism in the United Kingdom
- Joint Ministerial Committee (United Kingdom)
- Unionism in the United Kingdom
- Devolution in Scotland
- Scottish independence
- Welsh independence
- Constitutional status of Cornwall
- Constitutional status of Orkney, Shetland and the Western Isles
- "IRA Ceasefire". The Search for Peace. BBC. 2005. Retrieved 25 October 2011.
- Alvin Jackson, Home Rule, an Irish History 1800–2000, (2003), ISBN 0-7538-1767-5
- March target date for devolution, BBC News Online, 13 October 2006
- "NI deal struck in historic talks". bbc.co.uk. 26 March 2007.
- "Historic return for NI Assembly". bbc.co.uk. 8 May 2007.
- "Ian Paisley retires as NI Assembly completes historical first full term". BBC News. 25 March 2011.
- Archived 6 August 2011 at the Wayback Machine.
- "Local Parliaments For England. Mr.
Churchill's Outline Of A Federal System, Ten Or Twelve Legislatures". The Times. 13 September 1912. p. 4. - "Mr. Winston Churchill's speech at Dundee". The Spectator: 2. 14 September 1912. Retrieved 20 September 2014. - Smith, David M.; Wistrich, Enid (2014). Devolution and localism in England. p. 6. ISBN 9781472430793. Retrieved 22 September 2014. - Audretsch, David B.; Bonser, Charles F., eds. (2002). Globalization and regionalization : challenges for public policy. Boston: Kluwer Academic Publishers. pp. 25–28. ISBN 9780792375524. Retrieved 22 September 2014. - House of Commons Justice Committee (2009). Devolution : a decade on. London: TSO. pp. 62–63. ISBN 9780215530387. Retrieved 22 September 2014. - Jones, Bill; Norton, Philip, eds. (2004). Politics UK. Routledge. p. 238. ISBN 9781317581031. Retrieved 22 September 2014. - Williams, Shirley (16 September 2014). "How Scotland could lead the way towards a federal UK". The Guardian. Retrieved 20 September 2014. - Hazell, Robert (2006). "The English Question". Publius. 36 (1): 37–56. doi:10.1093/publius/pjj012. - Carrell, Severin (16 January 2007). "Poll shows support for English parliament". The Guardian. London. Retrieved 9 February 2011. - Ormston, Rachel; Curtice, John (December 2010). "Resentment or contentment? Attitudes towards the Union ten years on" (PDF). National Centre for Social Research. Retrieved 9 February 2011. - Curtice, John (February 2010). "Is an English backlash emerging? Reactions to devolution ten years on". Institute for Public Policy Research. p. 3. Retrieved 9 February 2011.[permanent dead link] - Kumar, Krishan (2010). "Negotiating English identity: Englishness, Britishness and the future of the United Kingdom". Nations and Nationalism. 16 (3): 469–487. doi:10.1111/j.1469-8129.2010.00442.x. - "Answer sought to the West Lothian question". BBC News Scotland. 8 September 2011. Retrieved 8 September 2011. - BBC News, England-only laws 'need majority from English MPs' , 25 March 2013. 
Retrieved 25 March 2013 - Sheehy, James (22 October 2015). "Scottish MPs denounce Bill regarding devolution powers". The Independent. - James, Sheehy (22 October 2015). "English Votes For English Laws". The BBC. - Demianyk, Graham (10 March 2014), "Liberal Democrats vote for Cornish Assembly", Western Morning News, retrieved 20 September 2014 - Green Party of England and Wales (2 May 2014), Green Party leader reaffirms support for Cornish Assembly, retrieved 20 September 2014 - Andrew George MP, Press release regarding Cornish devolution October 2007 Archived 25 December 2007 at the Wayback Machine. - Cornish Constitutional Convention. "The Cornish Constitutional Convention". Cornishassembly.org. Retrieved 2012-10-09. - "11th December 2001– Government gets Cornish assembly call". BBC News. 2001-12-11. Retrieved 2012-10-09. - Great Britain: Parliament: House of Commons: ODPM: Housing, Planning, Local Government and the Regions Committee, Draft Regional Assemblies Bill, The Stationery Office, 2004. - Cornish Constitutional Convention. "Cornwall Council leader supports Cornish devolution". Cornishassembly.org. Retrieved 2012-10-09. - Demianyk, G (27 November 2014). "South West councils make devolution pitch as Scotland gets income tax powers". Western Morning News. Retrieved 28 November 2014. - "Labour's Devolution Pledge For Cornwall". Pirate FM. UKRD. 23 January 2015. Retrieved 23 January 2015. - "Yorkshire could be ‘God’s Own Country’, says Leeds professor", Yorkshire Evening Post, 12 September 2014 - BBC News, Yorkshire and the Humber (European Parliament constituency) - Rosthorn, Andrew (1 October 2014). "Campaign for the North wants the lost kingdom of Eric Bloodaxe". Tribune. - White, Steve (1 October 2014). "Viking referendum demands a Northern state based on kingdom of Erik Bloodaxe". Daily Mirror. Trinity Mirror. - Blackhurst, Chris (2 October 2014). "It's not because I'm sentimental about the North that I believe it needs devolved powers". The Independent. 
Independent Print Ltd. - Staff writer (6 October 2014). "Ex-MP pushes for northern parliament". The Gazette. Johnston Press. - "Crown Dependencies, 8th Report of 2009–10, HC 56-1". House of Commons Justice Select Committee. 23 March 2010. - In relation to Jersey, see "Jersey law course". Institute of Law Jersey. Archived from the original on 28 December 2013. - States of Jersey Law 2005 (PDF). States of Jersey. Archived from the original (PDF) on 23 May 2012. - Second Interim Report of the Constitution Review Group (PDF). States of Jersey. 27 June 2008. - "Meet our new foreign minister". Jersey Evening Post. 14 January 2011. Archived from the original on 17 January 2011. - "Editorial: A new role of great importance". Jersey Evening Post. 17 January 2011. Archived from the original on 22 January 2011. - "Editorial: Legal ideas of political importance". Jersey Evening Post. 21 September 2010. Archived from the original on 2 July 2011. - Sibcy, Andy (17 September 2010). "Sovereignty or dependency on agenda at conference". Jersey Evening Post. Archived from the original on 2 July 2011. - "Common policy for external relations" (PDF). States of Jersey. Retrieved 8 December 2012. - Ogier, Thom (27 October 2009). "Independence—UK always willing to talk". Guernsey Evening Press. Archived from the original on 18 March 2012. - Prouteaux, Juliet (23 October 2009). "It IS time to loosen our ties with the UK". Guernsey Evening Press. Archived from the original on 28 October 2009. - Ogier, Thom (13 October 2009). "Full independence would frighten away investors and firms". Guernsey Evening Press. Archived from the original on 20 October 2009. - Tostevin, Simon (9 July 2008). "Independence? Islanders don't want it, says Trott". Guernsey Evening Press. Archived from the original on 22 November 2008. - Staff writer (27 January 2011). "Channel Islands' "man in Europe" appointed". Jersey Evening Post. Archived from the original on 2 February 2011. - Staff writer (6 July 2010). 
"£105,000 – the tax-free reward for being a royal rep". Jersey Evening Post. Archived from the original on 16 August 2011. Retrieved 29 July 2013. - Ogier, Thom (3 July 2010). "Guernsey will choose its next Lt-Governor". Guernsey Evening Press. Archived from the original on 13 August 2011. Retrieved 29 July 2013. - "The current Welsh devolution settlement" (PDF). commissionondevolutioninwales. Commission on Devolution in Wales. Archived from the original (pdf) on 28 February 2016. - Understanding "parity": departmental briefing paper (pdf). Northern Ireland Assembly. - Bogdanor, Vernon (2001). Devolution in the United Kingdom. Oxford New York: Oxford University Press. ISBN 9780192801289. - Swenden, Wilfried; McEwen, Nicola (July–September 2014). "UK devolution in the shadow of hierarchy? Intergovernmental relations and party politics". Comparative European Politics. Palgrave Macmillan. 12 (4–5): 488–509. doi:10.1057/cep.2014.14. - Blick, Andrew (2015). 'Magna Carta and contemporary constitutional change'. History & Policy. http://www.historyandpolicy.org/policy-papers/papers/magna-carta-and-contemporary-constitutional-change
Source: https://en.m.wikipedia.org/wiki/Devolution_in_the_United_Kingdom
Boiling is cooking in plenty of liquid, usually water or stock. Bubbles can be seen on the surface of the pan or pot in which the food is placed, and the liquid should reach a rolling boil so that the food cooks thoroughly. The pan should be of a suitable size for the volume of food to be boiled. There are two stages of boiling.

Commonly, vegetables grown above the ground are cooked in boiling salted water, and vegetables grown under the ground are started in cold salted water, with the exception of new potatoes and new carrots. Dried vegetables are started in cold water. Salt is added only after the vegetables become tender. Fish should be cooked in hot liquid and should be allowed to simmer.

Poaching is cooking in a small amount of liquid, without bubbling, below boiling point (93 to 95 degrees Celsius). Examples of poaching are poached fruits, poached eggs, etc.

In steaming, the food to be cooked is surrounded by plenty of steam from fast-boiling water, either directly or by placing the food in a basin or other dish set in the steam or boiling water. This is a slow process of cooking, and only easily cooked food can be prepared by this method. It is cooking by moist heat (water vapor), by direct or indirect steaming. Indirect steaming is done when food is placed in a closed pan which is surrounded by plenty of steam from fast-boiling water or a steamer. Direct steaming is done by placing food in a pressure steamer.

Advantage of steaming

Stewing is a very gentle method of cooking in a closed pan using only a small quantity of liquid. The food should never be more than half covered with the liquid, and the food above the level of the liquid is cooked by steam. As the liquid is not allowed to boil during cooking, the process is a slow one. Examples are stewed lamb, stewed pork, etc.

General rules of stewing

Advantage of stewing

Braising is a combined method of roasting and stewing in a pan with a tight-fitting lid.
Correct use of this method requires a special pan, but a casserole dish or stew pan makes a good substitute. The meat should be sealed by browning on all sides and then placed on a lightly fried bed of mirepoix. Stock or gravy is added, which should come two-thirds of the way up the meat to be braised. The flavorings and seasonings are then added. The lid is put on and the meat is allowed to cook gently on the stove or in the oven. When nearly done, the lid is removed and the joint is frequently basted to glaze it. Examples of braising are braised lamb chops, braised veal shanks, etc.

Broiling is cooking by direct heat with the aid of very little fat, and the term is used synonymously with grilling. In pan broiling, the food is cooked uncovered on hot metal such as a grill or a frying pan. Excess fat accumulated while cooking should be poured off. Broiled foods are good for diet-conscious people, as the amount of fat present is very low.

Grilling can be synonymous with broiling. The food to be grilled is supported on iron grids over the fire, on a grid placed in a tin under a gas or electric grill, or between electrically heated grill bars. There are basically three types of grilling.

Cooking food on greased grill bars over fast, direct heat is known as over-heat grilling. Only first-class cuts of meat, poultry and certain fish can be prepared this way. The grill bars are brushed with oil to prevent the food sticking, and can be heated by charcoal, gas or electricity. The bars should char the food on both sides to give the distinctive flavor of grilling. The thickness of the food and the heat of the grill determine the cooking time. Grills are typical à la carte (French choice menu) dishes and are ordered by the customer to the degree of cooking required, such as rare, medium rare, medium or well done.

Cooking on grill bars or on trays under direct heat is known as under-heat grilling.
Steaks and chops are cooked on the bars, but fish, tomatoes, bacon and mushrooms are usually cooked on trays. This method can also be used in the preparation of food au gratin or when glazing is required.

In the third type, foods are cooked in a closed oven between electrically heated grill bars.

Microwave cooking is the most recent and convenient method of cooking. A microwave oven is a relatively small, boxlike oven that raises the temperature of food by subjecting it to a high-frequency electromagnetic field. It works with the help of electricity. When food is placed inside a microwave oven, it is cooked by this electromagnetic field, which heats the food quickly and evenly, rather than by hot air.

Joshi, Basant Prasad et al., Fundamentals of Hotel Management-XII, Sukunda Pustak Bhandar, Kathmandu
Bhandari, Saroj Sing et al., Principle of Hotel Management-XII, Asmita Publication, Kathmandu
Oli, Gopal Singh et al., Hotel Management Principle and Practices-XII, Buddha Prakashan, Kathmandu
Source: https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/6621
Most of us are aware of the risks of being overweight. Being overweight can increase the risk of Type 2 diabetes, high blood pressure, heart disease, certain types of cancer, sleep apnea and osteoarthritis, and can cause pregnancy problems in women, such as gestational diabetes and an increased risk of cesarean section.

There is a multitude of diets which prescribe how to lose weight quickly. Among the best known are the high-protein diet, the low-carbohydrate diets, the GM diet, etc. However, despite achieving short-term weight-loss goals, these plans do not deliver sustainable results. In fact, the best diet for weight loss is the combination of better nutrition through a balanced diet and regular physical activity.

The main principles of dieting: why do we lose weight on a diet?

Any gain or loss of weight is the result of a change in our calorie balance. To achieve what is called energy balance, you have to spend as much energy as you consume. Calories represent the energy value of food. The more calories a food contains, the more energy we must spend to compensate.

How does a weight-loss diet work?

To lose weight, it is necessary to create a negative balance, that is to say, to burn more energy than the calories ingested. Losing 0.5 kg per week, a reasonable goal, is equivalent to spending about 3,500 more calories than were consumed. This implies a negative energy balance of 500 calories per day, which can be achieved by a combination of reasonable dietary restriction and regular physical activity.

Several factors may interfere with the calories-consumed variable. For example, it has been shown that the true calorie content of a food can be 20 to 30% higher or lower than the value on the nutrition label. Can we really rely on the nutrition label to calculate our calories? Besides, the amount of energy a food contains in calories is not necessarily the amount of energy we absorb, store and/or use.
Indeed, we consume less energy from minimally processed carbohydrates and lipids because they are more difficult to digest. It is therefore better to eat food that is as little processed as possible. Also, we absorb more energy from foods that are cooked, because cooking breaks down plant and animal cells, thus increasing their bioavailability. Finally, depending on the type of bacteria present in our gut, some people extract energy (calories) from the walls of plant cells more easily than others.

Why is a fast weight-loss diet not a good idea?

In the vicious circle of dieting, basal metabolism is negatively affected. Basal metabolism is the energy that the body spends on essential functions such as breathing, blood circulation, and so on. These functions make up about 60% of daily caloric expenditure. The higher our basal metabolism, the higher our daily energy expenditure. However, drastic diets reduce our basal metabolism: on a crash diet, the body panics and goes into energy-saving mode. Less energy is then spent at rest, and the risk of regaining weight is much higher. Moreover, age is not on our side: from the age of 20, our basal metabolic rate decreases by 2 to 3% per decade. That is why the older you get, the harder it becomes to lose weight. On the other hand, muscle mass and the level of physical activity increase basal metabolism, and therefore energy expenditure. Men on average have a higher basal metabolic rate than women because they have more muscle mass. This highlights the importance of including strength-training exercises in our physical activity routine.

What is the best possible diet for losing weight?

If you decide to lose weight, it is advisable to do it the best way. Drastic methods are too restrictive: deficient in calories, excluding certain foods or food groups completely, and imposing numerous dietary prohibitions.
This can lead to food cravings and a feeling of loss of control. A sense of failure follows, and then we start another diet. That is how the vicious circle starts. The more diets we attempt, the more the risks to physical and mental health accumulate.

What is a good pace of weight loss?

If you are overweight or obese, losing only 5 to 10% of your weight over 6 months dramatically reduces your risk of heart disease and other health conditions. The recommended rate of weight loss to stay healthy is 0.5 to 1 kg per week. Losing weight at this rate will help you maintain your weight afterward, and will give you the time to integrate your new lifestyle. Maintaining a moderate weight loss over a long period is better than losing a lot of weight and regaining it afterward. Indeed, it has been shown that when people regain lost weight, they mostly regain fat tissue (fat mass) and do not return to their initial muscle mass.

Men and women are not equal in the face of weight loss. As mentioned previously, men have a larger muscle mass and therefore a higher basal metabolism, which is favorable for weight loss. However, men, mainly because of hormones, tend to accumulate more visceral fat, which is dangerous for health. Moreover, men, unlike women, tend to underestimate their degree of obesity. Another difference is that women more frequently eat with their emotions (stress, depression, low self-esteem, general mood), which can interfere with the maintenance of weight loss, because such eating is driven by the head and not the stomach. These differences in how men and women approach weight loss must be taken into account when making lifestyle changes.
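The energy-balance arithmetic described earlier can be sketched as a short calculation. This is an illustrative sketch only: the 3,500-calorie-per-0.5 kg figure is the conventional rule of thumb quoted in this article, not an exact physiological constant, and the function name is my own.

```python
# Rule-of-thumb energy balance: ~3,500 kcal of deficit per 0.5 kg lost
# (the conventional figure used in the text; real-world results vary).
KCAL_PER_HALF_KG = 3500

def daily_deficit(target_kg_per_week: float) -> float:
    """Daily calorie deficit implied by a weekly weight-loss target."""
    weekly_deficit = (target_kg_per_week / 0.5) * KCAL_PER_HALF_KG
    return weekly_deficit / 7

# A reasonable goal of 0.5 kg/week implies a ~500 kcal/day deficit,
# e.g. eating ~250 kcal less and burning ~250 kcal more through activity.
print(daily_deficit(0.5))  # 500.0
print(daily_deficit(1.0))  # 1000.0
```

At the recommended pace of 0.5 to 1 kg per week, the implied deficit stays in the 500 to 1,000 kcal/day range, which is achievable without the drastic restriction criticised above.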
Source: https://www.healthscareconcept.com/how-proper-diet-helps-to-lose-weight-easily/
In a matter of weeks, Beijing has made a series of announcements to show its dedication to reducing the harmful effects of coal-fired thermal energy. On July 23rd 2014, Xinhua News Agency announced that four large coal-fired power plants in Beijing will close before 2016, with Gaojing Thermal Power Plant's six 100MW units already shut down. The central Chinese government reported that the city aims to reduce dependency on coal by 2.6 million tons by the end of 2014, and by an additional 6.6 million tons by 2016 (a total of 9.2 million tons). On August 4th 2014, it was announced on Beijing's Municipal Environmental Protection website that the city is aiming to be coal-free by 2020. Following this, on August 6th, the government announced a ban on coal imports with a high ash and sulphur content, to come into effect on September 1st. Additionally, in June, Greenpeace East Asia reported that six Chinese provinces and two regions have promised to reduce total coal consumption, and another two provinces have set a goal of keeping the growth rate within 2% by the end of 2017. Notably, most of them are located in East China, and altogether these provinces and regions account for 44% of national coal use. These local policies are consistent with national goals from 2015, namely replacing coal with natural gas and non-fossil energy in electricity generation and cutting the percentage of coal in total energy consumption to less than 65% by 2017. By 2020, two of the main targets are to reduce greenhouse gas emissions per unit of GDP by 40-50% from 2005 levels, and to increase the percentage of non-fossil energy in primary energy to 15%. In the long term, China aims to reduce the percentage of coal-fired power generation to 35-40% by 2050. However, despite these local and national policies, China is currently building coal-fired power plants at an average rate of one per week, according to Greenpeace East Asia.
In the meantime, another media organisation reported on July 30th that the country plans to build 50 new coal gasification plants, primarily in northwestern China, away from big cities like Beijing. Greenpeace East Asia reported that these plants would emit approximately 1.1 billion tons of CO2 a year, roughly one-eighth of China's total emissions in 2011. Despite Beijing's promise to end its ties with coal, China's plan to build new coal gasification plants outside Beijing and other large cities merely shifts pollution to other areas. To make matters worse, building gasification plants, which are quite water-intensive, may put further pressure on water security in those regions. In order to supply energy to a population of over 1 billion, China imported a record 330 million tons of coal in 2013, though according to research by Carbon Tracker these imports are predicted to decrease. On the one hand, Chinese thermal coal demand is estimated to peak between 2015 and 2030; on the other hand, China is trying to expand domestic coal supply. These trends may further enlarge the over-supply in the international market, especially from Australia and Indonesia. Overall, burning coal of any grade results in large amounts of toxic emissions. According to the official Xinhua News Agency, coal is the source of 22% of the fine particles in Beijing's air. The side-effects of China's seemingly never-ending addiction to coal are evident as polluted cities put out daily smog alerts to warn citizens.

GEI, 4th Asia-Pacific Climate Change Adaptation Forum, October 1-3, 2014.
Zhou, Weisheng, Proposal on a Low-carbon Community in East Asia and Actions, Kyoto International Environment Symposium, November 5, 2014.
Source: https://sekitan.jp/en/info/english-beijing-attempts-to-tackle-air-pollution-from-coal-plants/
Aloha! Native Hawaiians are those with ancestry to the indigenous people of the Hawaiian Islands. There are 140,652 Native Hawaiians living in the United States, and Southern California is home to 20,571 Native Hawaiians. Over 50% held at least a high school degree, compared to 20% of the US population; only 2,393 earned a bachelor's degree or higher, and 2,203 Native Hawaiians in California are living below the poverty level (factfinder.census.gov, 2000). With regard to cancer health disparities, a study of Native Hawaiians in Hawaii found that 64% of Native Hawaiians were obese, compared to only 29% in the U.S. population (Curb et al., 1991; Mokuau et al., 1995). Rates of prostate cancer among Native Hawaiians and Samoans rose from 140 to 175 per 100,000 during the 1990s, then remained steady through 2006. Rates of uterine cancer among Native Hawaiians and Samoans spiked from 30 to 90 per 100,000 in the 1990s, then dropped to 60 in 2001-2006 (L.A. Cancer Surveillance Program, 2009). According to Miller et al. (2008), Native Hawaiians in the U.S. had a breast cancer incidence of 175.8 per 100,000, significantly higher than that found among non-Hispanic whites (145.2 per 100,000), and a mortality of 33.5 per 100,000, compared to 27.8 per 100,000 for non-Hispanic whites (Miller). Nearly 40% of Native Hawaiian breast cancers in the U.S. were detected at the regional or distant stage, and over 50% of cervical cancers were detected at the regional or distant stage (Miller). In Hawai`i, Native Hawaiian women have the highest mortality rate for cancer of the breast (38 per 100,000) compared to the general population (22.5 per 100,000) (Gotay). Epidemiological information for Pacific Islanders on the continental U.S. mirrors that for Hawai`i; for instance, a study conducted in Orange County found that the odds of breast cancer incidence among Native Hawaiians and other Pacific Islanders were almost 2.3 times those of non-Hispanic whites (Marshall).
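As a rough check on the comparisons above, the quoted rates per 100,000 can be turned into rate ratios by simple division; the sketch below uses only the figures given in the text, and the function name is illustrative rather than taken from any cited study.

```python
def rate_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of two incidence or mortality rates per 100,000
    (meaningful when both rates share the same denominator)."""
    return rate_group / rate_reference

# Breast cancer incidence, Native Hawaiians vs non-Hispanic whites
# (175.8 vs 145.2 per 100,000; Miller et al., 2008).
print(round(rate_ratio(175.8, 145.2), 2))  # 1.21

# Breast cancer mortality (33.5 vs 27.8 per 100,000).
print(round(rate_ratio(33.5, 27.8), 2))  # 1.21
```

Both ratios are about 1.2, i.e. roughly 20% higher rates; note that the 2.3-times figure from the Orange County study is an odds ratio, which is not directly comparable to these rate ratios.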
Later cancer diagnosis stages for Pacific Islanders account for much of the disparity in their survival. According to Marshall and colleagues (2008), Orange County Hawaiians and other Pacific Islanders were over 2.4 times more likely to have later stages of breast cancer diagnosis. In a 2006 study of cervical and breast cancer knowledge and screening among 157 Native Hawaiian women (Tran et al.), only 46.5% of the women were able to accurately describe screening procedures for cervical and breast cancer. In September of 2013, the Cancer Prevention Institute of California published cancer incidence trends among Native Hawaiians between 1990 and 2008. Among Native Hawaiian males, the top five diagnosed cancers were prostate, lung, colon/rectum, non-Hodgkin lymphoma (NHL), and stomach cancer. All five types were observed to have decreasing rates over the 19-year period. Despite a 0.4 percent decrease annually, prostate cancer remains the most commonly diagnosed cancer. Within the Native Hawaiian female population, the top five diagnosed cancers were breast, lung, uterine corpus, colon/rectum, and pancreas. Both breast and lung cancer rates were observed to increase during the first half of the 19-year period and to decrease in the latter half. Cancer of the uterine corpus decreased by 2.7 percent annually, while colon/rectum and pancreatic cancer rates remained stable. [Figure: Cancer incidence rate percent change since the late 1990s. Source: CPIC Native Hawaiian Cancer Fact Sheet.] Pacific Islander Health Partnership (PIHP): PIHP is a nonprofit collaborative of community and faith-based organizations serving Native Hawaiians and Pacific Islanders “to reduce health disparities through education, training, advocacy, building island community capacity for health”.
PIHP includes Hawaiian Civic Clubs (`Ainahau o Kaleponi & `Ahahui o Liliuokalani HCC), Kama`aina Club of Orange County, Richard Kane Health Education Foundation, Hawai`i Daughters Guild, glee clubs, outrigger canoe, golfing, and surfing groups, Hawai`i student organizations, and all those who identify with Hawai`i and the Hawaiian cultural network. PIHP actively presents a variety of Aloha Seniors’ cultural arts and senior programs, conducts breast & cervical cancer surveys, sponsors the Pacific Island Sleepover at the Aquarium of the Pacific in Long Beach, conducts health screening activities at various Hawaiian community festivals, engages in diabetes and cancer education and training projects, and partners with CalOptima on diabetes and cancer education activities. Contact: Jane Ka`ala Pang, RN, BSN, PHN, Program Manager, Pacific Islander Health Partnership, 12900 Garden Grove Blvd., #214-A, Garden Grove, CA 92843. Tel: (714) 968-1785
- Cancer Prevention Institute of California (2013). Cancer Incidence Trends among Native Hawaiians in the United States, 1990-2008. Retrieved from http://www.cpic.org/files/PDF/Cancer_Registry/Fact_Sheets/Native_Hawaiian_Cancer_Fact_Sheet.pdf
- Curb JD, Aluli NE, Kautz JA, Petrovitch H, Knutsen SF, Knutsen R, O’Conner HK, O’Conner WE. Cardiovascular risk factor levels in ethnic Hawaiians. American Journal of Public Health, 1991, 81(2): 164-167.
- Los Angeles Cancer Surveillance Program (2009). Cancer in Los Angeles County, Trends by Race/Ethnicity.
- Mokuau N, Hughes CK, Tsark JU. Heart disease and associated risk factors among Hawaiians: Culturally responsive strategies. Health & Social Work, 1995, 20(1): 46-51.
- Tran, J.H., Mouttapa, M., Ichinose, T. Y., Pang, J. K., Ueda, D. & Tanjasiri, S. P. (2010). Sources of information that promote breast cancer and cervical cancer knowledge and screening among Native Hawaiians in southern California. Journal of Cancer Education (in press).
- U.S. Census Bureau, “American FactFinder”, <http://factfinder.census.gov>; retrieved June 16, 2010.
Source: https://research.cgu.edu/wincart/communities/native-hawaiian/
On December 19, 1918, the first Believe It or Not! cartoon was published, then called “Champs and Chumps” and featuring a collection of sports oddities Ripley had saved. In December 1922, Ripley embarked on his first around-the-world trip and returned on April 7, 1923. He published his travel journal in installment form. Ripley’s Fun Fact: Ripley visited a number of ancient and holy sites as he toured through western Asia, southern Asia, and the Middle East, in countries including Iraq, Israel, Afghanistan, Jordan, Pakistan, Syria, and more. Ripley hired Norbert Pearlroth in 1923 as a part-time linguist and translator, but he soon became the man behind the cartoon curtain. Pearlroth’s expertise and researching capabilities made him a lifelong partner. “A Walking Encyclopedia.” -Robert L. Ripley In 1929, Ripley published the very first Believe It or Not! book, which flew off bookstore shelves. On July 9, Ripley joined Hearst’s King Features Syndicate, and the Believe It or Not! cartoon went from being published in 17 papers to worldwide distribution. “Truth is stranger than fiction.” -Robert L. Ripley Ripley signed with NBC in the 1930s, beginning his 14-year run on radio. Ripley’s Fun Fact: Robert Ripley was the first to broadcast underwater, with his Marineland stunt in St. Augustine, Florida, on February 23, 1940. The second Believe It or Not! book was published. Ripley created movie shorts for Vitaphone Pictures, made by Warner Bros. The first, biggest, and most successful national Believe It or Not! contest was held. After sifting through millions of submissions, the grand prize winner was announced as Clinton Blume. The first Odditorium opened in Chicago, Illinois, at the World’s Fair. Inside the museum were dozens of Ripley’s famous cartoons, live performers, and hundreds of strange and exotic artifacts Ripley acquired on his worldly travels. The success of the Odditorium led to several more appearances at world expositions across the country.
Robert Ripley died on May 27 after collapsing on the set of his weekly television show. Hundreds attended the service, including celebrities, journalists, athletes, and cartoonists. The first permanent Believe It or Not! museum opened in St. Augustine, Florida, which still operates in its original location at Warden’s Castle. Ripley’s Fun Fact: Today, Ripley’s Believe It or Not! St. Augustine, a 20,000-square-foot attraction, boasts three floors of exhibits, including some of Ripley’s original collection. Over the years, Ripley’s iconic St. Augustine Red Train Sightseeing Tours as well as Ghost Adventure Tours joined the St. Augustine family of attractions. A successful national Ripley’s Believe It or Not! TV show was broadcast for 82 episodes, starring Jack Palance and his daughter Holly Palance. The company took its biggest step toward diversifying its line of attractions with the opening of the first Ripley’s Aquarium in Myrtle Beach, South Carolina. In 2004, Ripley Publishing was launched with the successful New York Times bestseller Ripley’s Believe It or Not!, which sold 2.2 million copies. Today, the series is released every year with all-new content. The annual Ripley’s Believe It or Not! book is translated into dozens of languages and published in foreign markets. To date, Ripley Publishing has sold over 10 million books! Ripley’s has also published more than 1,000 new creative content pieces and amassed millions of online followers.
Source: https://www.ripleys.com/a-century-of-strange/
anatomy of blood vessels thin tunica externa thick tunica media narrow and regular tunica intima thick tunica externa thin tunica media irregular tunica intima layer of endothelium layer of smooth muscle and elastic fibers layer of collagen fibers most superficial tunic thin tunic of capillaries especially thick in elastic arteries contains smooth muscle and elastin has a smooth surface to decrease resistance to blood flow. veins need valves to create pressure to pump the blood to the heart. valves assist in returning venous blood to the heart. why are valves present in veins but not in arteries? muscular pump (milking action of skeletal muscle) and respiratory pump (thoracic cavity pressure) name two events occurring within the body that aid in venous return. because arteries are closer to the pumping action of the heart, their walls must be strong enough to take the changes in pressure. why are the walls of arteries proportionately thicker than those of the corresponding veins? the arterial system has one of these, the venous system has two.
these arteries supply the myocardium internal carotid and vertebral two paired arteries serving the brain longest vein in the lower limb artery on the dorsum of the foot checked after leg surgery deep artery of the thigh serves the posterior thigh supplies the diaphragm formed by the union of the radial and ulnar veins basilic and cephalic two superficial veins of the arm artery serving the kidney veins draining the liver artery that supplies the distal half of the large intestine drains the pelvic organs what the external iliac artery becomes on entry into the thigh major artery serving the arm supplies most of the small intestine join to form the inferior vena cava an arterial trunk that has three major branches, which run to the liver, spleen and stomach major artery serving the tissues external to the skull anterior tibial, fibular and posterior tibial three veins serving the leg artery generally used to take the pulse at the wrist
Source: https://quizlet.com/6074380/anatomy-of-blood-vessels-flash-cards/
Before you start writing a character, it’s important to understand what a character is. I don’t mean that question in the psychological sense, but in the mechanical one. In general terms: why do stories need characters? In particular ones: what kind of character do you need at this moment? To answer those questions clearly, it’s important to have an understanding of what characters actually do. All art – regardless of type, school, or genre – exists to evoke an emotional response, to make the person experiencing it feel something. Paintings and photographs do it with lighting and composition. Film and theatre do it with movement and facial expressions. Music does it with keys and crescendos. However, though all of these different types of art might tell a narrative, a piece of narrative art itself primarily evokes emotions through the use of characters. The person experiencing the story uses the characters within it as a medium to interpret the events of that story. In the simplest terms: in a story, things happen to characters, and the result is what is supposed to evoke the desired emotion from the audience. Of course, not just any emotion will do. Usually, when you write a story, you want the reader to feel a particular range of emotions in a given circumstance. Ultimately, that’s what a story is judged on. If a narrative doesn’t evoke the intended emotion, it feels wrong. If there is a disconnect between how the audience feels and how a narrative wants them to feel, then you have a bad story. All of this is to say that control over what emotions you evoke at any given time is key to a good story, and as a writer, your most important tools for doing so are your characters. Audiences will cheer when good things happen to characters they sympathise with, even if the ‘story’ says they’re the villain. Audiences will become sad when sympathetic characters have bad things happen to them.
If those bad things were imposed by another character, they might get angry. If that other character is also sympathetic, they might feel the sort of inner conflict which gives a story moral complexity. The emotions your story evokes at any given time are ultimately a consequence of your plot colliding with your characters. Whether or not you evoke the desired emotions depends on how the audience regards the characters in question. Which means the very first question you need to ask yourself when you’re developing a character is: ‘How do I want my audience to feel about this character?’ The answer to that question should be the basis of what that character is. Naturally, you’ll want readers to feel different ways towards different characters, but you’ll always want them to feel something. Not for nothing are the eight words no author wants to hear ‘I don’t care what happens to these people’. To establish, maintain, and control this emotional connection between character and reader, a writer needs to determine and ultimately demonstrate three properties of any given character: who they are at first glance, how they became the way they are, and how they react to the events and setbacks which the world throws at them, or more concisely: Concept, Circumstances, and Conflict. A character’s Concept is who they are socially, emotionally, and physically, distilled. It’s how you would describe that character to the uninitiated in one sentence or less. If you want a character to be interesting, then creating an interesting Concept is vital. Personally, I believe that the best character Concepts are the ones which introduce themselves as contradictions: the pacifist soldier, the coward with a hero’s reputation, the power-hungry idealist. That’s because a contradiction automatically prompts two questions from the audience: ‘How did they become so contradictory?’ or ‘If those two contradictory elements came into conflict, which one would win?’.
These are not questions you should answer immediately. The key to maintaining an audience’s interest in a character is suspense, and suspense can ultimately be described as what takes place between making the audience ask a question, and answering that question. However, that doesn’t mean you should ideally keep everything not revealed in a character’s establishing scene a secret until the moment of a final reveal. An audience will lose interest in a character if nothing new ever happens to them. This means that to keep interest, a writer often needs to keep feeding it. This is where Circumstances and Conflicts come in. Circumstances are, effectively, the character’s backstory. Since nobody is born fully emotionally or physically formed, Circumstances serve as the answer to the first question, that of ‘why is this character like this?’ Conflicts, on the other hand, are elements which challenge the character in ways which require a response. How that character responds to that challenge throws greater light on who they are, answering the second question. Characters are made compelling by the way that these three elements – Concept, Circumstances, and Conflicts – interact. An interesting Concept means that the audience wants to know more about that character – a need that is fed by the uncovering of that character’s Circumstances. Circumstances, in turn, not only provide context to a character’s Concept, but also to the way they handle Conflicts. Meanwhile, Conflicts not only serve to reveal hidden parts of a character’s Circumstances and Concept through the way that character reacts to them, but it also might change some part of the character’s Concept itself. Working together, these three elements catch your audience’s attention and keep them emotionally invested, but they also serve as levers for you to set what emotions you want your audience to feel. 
Make the audience feel good by having a Brave-but-uncertain (Concept) character overcome (Conflict) a past trauma (Circumstance). Make the audience feel sad or helpless when a Moral-but-loyal (Concept) character inflicts harm (Conflict) because of a childhood promise (Circumstance). So long as you have these three factors in combinations that fit within the internal logic of your story, you should be able to have all the ingredients necessary not only to make a character which your audience will be interested in, but one which will make them feel the emotions you need them to so that they can serve your narrative.
Source: https://cataphrak.com/patreon-content/writing-and-worldbuilding/april-2020-building-character/
Mark 5:9 "And [Jesus] asked him, What is thy name? And he answered, saying, My name is Legion: for we are many." (King James Version) The story of Jesus casting the demons out of the possessed man and into a herd of pigs is told in Mark 5:1-20, Matthew 8:28-34, and Luke 8:26-39. The Gospel according to John does not tell the story. Of the three, Mark is the one that quotes the line as it is most famously remembered. Mark is also the only one to give an estimate of the number of demons, in verse 13 (about two thousand). Matthew does not name the demons, although, like Mark, Luke calls them Legion (verse 30). Luke (verse 26) and Mark (verse 1) agree on the name of the location (the country of the Gadarenes), but Matthew (verse 28) claims it happened in the country of the Gergesenes. These are likely just two names of the same place. Luke (verse 27) and Mark (verses 2-3) agree one man was possessed, but Matthew (verse 28) claims two possessed men. All authors claim the possessed were living in the tombs, and that the demons begged to be sent into the nearby herd of pigs when Jesus threatened them (see Matthew 8:29-31 below). Luke (verse 31) adds that the demons preferred this to being sent "out into the deep". All authors agree the pigs then immediately ran into a lake or the sea and drowned, and the possessed man (or men) was cured. This story is sometimes interpreted literally, as a real account of demonic possession, and other times as an allegory for salvation from temptation and evil, or from an overwhelming number of problems and other suffering. The fact that there are many demons is important to this interpretation, because while most people can easily deal with one or two temptations, or a small number of problems in their life, it can often become overwhelming when they start multiplying out of control. The message here is that with God's help, we can overcome any difficulties, no matter how numerous.
Since Matthew's account is the shortest, I will reproduce it here (although Matthew's account does not mention the name Legion). - And when he was come to the other side into the country of the Gergesenes, there met him two possessed with devils, coming out of the tombs, exceeding fierce, so that no man might pass by that way. - And, behold, they cried out, saying, What have we to do with thee, Jesus, thou Son of God? art thou come hither to torment us before the time? - And there was a good way off from them an herd of many swine feeding. - So the devils besought him, saying, If thou cast us out, suffer us to go away into the herd of swine. - And he said unto them, Go. And when they were come out, they went into the herd of swine: and, behold, the whole herd of swine ran violently down a steep place into the sea, and perished in the waters. - And they that kept them fled, and went their ways into the city, and told every thing, and what was befallen to the possessed of the devils. - And, behold, the whole city came out to meet Jesus: and when they saw him, they besought him that he would depart out of their coasts.
Source: http://www.everything2.com/index.pl?node_id=1467255
Here’s some interesting data: More than two-thirds of U.S. public school districts provide wireless internet access, computing devices and interactive whiteboards, according to the CDW-G 2011 21st-Century Classroom Report. In order to fully integrate these powerful learning tools, however, educators still need a comprehensive set of best practices for effective use of technology in the classroom. A new study of 58 Missouri school districts plans to address this need and provide a roadmap on how technology and professional development can improve student performance. The research is a collaboration between the eMINTS National Center, which offers research-based professional development courses to K-20 educators, the American Institutes for Research, and private-sector partners led by CDW-G. The effort is underwritten by an Investing in Innovation Fund (i3) grant from the U.S. Department of Education. “Schools are not able to provide teachers with as much professional development as they probably need,” says Monica Beglau, executive director of eMINTS. “In order to achieve the results schools want from technology, you need a long-term plan for professional development. Three or four days of laptop training is not enough,” she insists. “I think the study results will show that the best approach is a significant, long-term professional development program.” eMINTS professional development uses interactive group sessions and in-classroom coaching and mentoring to help educators integrate technology into their teaching. The eMINTS instructional model has demonstrated positive effects on student achievement in more than 3,500 classrooms across the United States. During the three-year study, researchers will track teacher progress through annual classroom observation and surveys. Researchers will also measure student progress and performance using scores from Missouri state assessments, a 21st-century skills assessment from Learning.com, and student surveys.
Seventh- and eighth-grade classrooms in 58 high-need rural districts in Missouri are participating in the program. Districts were randomly assigned to one of three groups with varying levels of professional development programs and technology. eMINTS and CDW-G deployed a total of 2,696 Lenovo student notebooks and 201 Lenovo teacher notebooks to Group 1 and Group 2 districts. Districts in Group 3 will receive student and teacher devices when the research is complete in 2015. For teachers, the devices are the center of classroom instruction, controlling the classroom technology and serving as the teachers’ personal learning devices for professional development. Students use the devices in each core content area – language arts, mathematics, science and social studies – to research, write and learn. Data collection will be complete in the spring of 2014, and analysis will be published in January 2015. For more information about the program, click here: http://www.emints.org/.
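The randomization step described above, 58 districts dealt into three study arms, can be sketched in a few lines of Python. The district names and the seed here are placeholders; only the counts (58 districts, 3 groups) come from the article:

```python
import random

# Sketch of the study's randomization: 58 districts assigned to three arms.
# District names and the seed are hypothetical, not the study's own labels.
def assign_groups(districts, n_groups=3, seed=0):
    """Shuffle the districts, then deal them into n_groups near-equal arms."""
    rng = random.Random(seed)
    shuffled = list(districts)
    rng.shuffle(shuffled)
    return {f"Group {i + 1}": shuffled[i::n_groups] for i in range(n_groups)}

districts = [f"District {i:02d}" for i in range(1, 59)]  # 58 placeholder names
groups = assign_groups(districts)
print({name: len(members) for name, members in groups.items()})
```

Dealing the shuffled list round-robin (`shuffled[i::n_groups]`) guarantees group sizes differ by at most one (here 20/19/19), which is the usual requirement for a balanced randomized design.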
Source: https://edtechdigest.com/2012/04/10/trends-designing-a-new-roadmap-for-student-performance/
In what may be a memorably controversial and groundbreaking new research paper, scientists from China describe in detail how, for the first time in human history, they successfully manipulated the genomes, or genetic blueprints, of human embryos, reintroducing new ethical concerns about what may be the next frontier for science. The story was first reported by Nature News this Wednesday, and the paper was originally published by a little-known online journal called Protein and Cell. In their paper, Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, and his colleagues describe the way in which they edited the genomes of embryos they received from a fertility clinic. The embryos were described in the paper as non-viable, incapable of resulting in a live birth, because a genetic error, the result of fertilization by two different sperm, left them containing an extra set of chromosomes. The researchers “attempted to modify the gene responsible for beta-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9,” according to the report from Nature News. “The researchers say that their results reveal serious obstacles to using the method in medical applications.” The researchers injected the CRISPR system into 86 different embryos and then waited another 48 hours for the molecules that would replace any missing DNA to begin their work. 71 of the embryos survived, and 54 of those were then tested. Researchers learned that just 28 of the embryos had been “successfully spliced, and that just a small fraction of these successes contained the necessary replacement genetic material,” read the report. “If you want to do it in normal embryos, you need to be close to 100 percent,” Huang said in a statement to Nature News. “That’s why we stopped.
We still think it’s too immature.” What was more concerning, however, is that there were a “surprising number” of unintended mutations that occurred during the process, and accelerating at a speed that was far higher than anything seen in earlier gene-editing studies which used either mice or adult human cells. Such mutations, moving at an unchecked speed, could be harmful, and they are one of the primary reasons for why people in the scientific community are expressly concerned. It’s a worry that grew when rumors of Huang’s research team began to circulate at the beginning of the year. “It underlines what we said before: we need to pause this research and make sure we have a broad based discussion about which direction we’re going here,” said Edward Lanphier, president of Sangamo BioSciences in Richmond, California, in an interview with Nature News. While the gene editing technique has shown some unprecedented success, there is the question of what effect rapid rates of mutation may have – bringing to light some potential disorders that the scientific community is not yet aware of. George Daley, who is a stem-cell biologist at Harvard Medical School of Boston, Massachusetts, was careful in his praise of the research, describing it as “a landmark, as well as a cautionary tale.” “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes,” he said to Nature News. More studies may be coming to light soon. So far, there are rumors of at least four other Chinese research teams also actively working on human embryos, according to the report. James Sullivan is the assistant editor of Brain World Magazine and a contributor to Truth Is Cool and OMNI Reboot. He can usually be found on TVTropes or RationalWiki when not exploiting life and science stories for another blog article.
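The attrition the paper reports, from injected embryos down to successful edits, can be tallied directly from the counts quoted above. This is a simple arithmetic sketch, not the researchers' own analysis:

```python
# Counts quoted in the article: 86 embryos injected with CRISPR/Cas9,
# 71 surviving after 48 hours, 54 tested, 28 successfully spliced.
injected, survived, tested, spliced = 86, 71, 54, 28

survival_rate = survived / injected  # fraction of injected embryos surviving
splice_rate = spliced / tested       # fraction of tested embryos with an edit

print(f"survival: {survival_rate:.0%}")  # roughly 83%
print(f"splicing: {splice_rate:.0%}")    # roughly 52%
```

The roughly 52% splicing rate, further reduced by the "small fraction" carrying the intended replacement material, illustrates why Huang's team judged the method far short of the near-100% efficiency they say clinical use would require.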
Source: https://cosmoso.net/gene-editing-technique-has-been-used-on-human-embryos/
Presbyopia is a refractive error that makes it difficult to see objects up close and perform tasks that require near vision, like reading. Presbyopia is not the same as hyperopia, also known as farsightedness. Hyperopia occurs due to the shape of the cornea and the length of the eyeball, which refract light to a point just behind the retina. Most people with hyperopia were born with the condition or develop it in their youth. Presbyopia, on the other hand, develops over time, usually after the age of 40. As you get older, the natural lens in your eye loses elasticity, which prevents it from changing shape to focus on close objects. Request an appointment to find out if you are developing presbyopia and what you can do about it. Much like wrinkles, gray hair, and senior moments, presbyopia is an inevitable part of ageing that will eventually affect all of us. While some people may develop symptoms of presbyopia earlier than others, wait long enough and everyone will have it. Monovision contact lenses involve a different prescription in each lens: one lens is responsible for near vision, and the other is responsible for distance vision. This makes it easier for the wearer to seamlessly switch their gaze between close-up objects and distant objects. Progressive lenses offer multiple corrective powers in a single lens without the harsh line of bifocals. These lenses use a gradient effect, allowing you to transition smoothly from distance to intermediate to near vision. Multifocal contact lenses are similar to progressive lenses but in contact lens form. These contacts offer different corrective powers in different areas of the lenses, using a gradient pattern of concentric rings that offers a smooth transition from distance to near. Corneal inlays are a surgical alternative to reading glasses. The surgery required to implant these inlays is relatively non-invasive and, according to Ophthalmology Times, yields remarkable results.
Many corneal inlays look like very small contact lenses; they are implanted inside the cornea of the non-dominant eye, changing the way light refracts in the eye and providing clearer near vision. At Stoney Creek Eye Care & Eyewear Boutique, our trusted team of eye doctors is committed to providing you with high-quality eye care, stylish frames, and personalized attention, and we offer a wide variety of services. We understand that life is hectic. We want to help make it a little simpler, which is why our centrally located practice offers extended hours on Mondays and Saturdays. We’re also happy to bill most major insurance companies directly on your behalf. And on top of all that? We’ve got plenty of free parking. Stoney Creek Eye Care & Eyewear Boutique, located in the Health Science Building, will always do whatever we can to make your life a little easier.
Source: https://stoneycreekeyecare.com/service/presbyopia/
We often assume that our appetite depends on how much food we’ve eaten, but a new study conducted in a completely dark restaurant has demonstrated that we don’t feel any more full if secretly slipped extra-large portions of food. What we see, it seems, plays a big role in how hungry we feel. The research, led by psychologist Benjamin Scheibehenne and published in the journal Appetite, invited participants to have lunch in a restaurant in downtown Berlin. While the entrance bar was lit, the restaurant itself was pitch black, and the volunteer ‘customers’ were served by blind waiters and waitresses who were capable of working in the dark. The ‘customers’ ate two main courses in the dark dining area, but what they didn’t know was that half were served normal-sized portions while the other half were served super-size portions that were more than a third bigger. Afterwards, the light was switched on and they were offered a dessert that they could serve themselves. The researchers measured how much dessert each person ate, and the diners were asked to fill in a questionnaire estimating how hungry they were, how much they ate, and whether they liked the food. Exactly the same experiment was run a few weeks later, with different volunteers, but with everyone eating in the light, as in a normal restaurant. For those who could see what they were eating, the size of their main course had a big effect on how full the diners felt and how much dessert they ate afterwards. But for those who dined in the dark, portion size didn’t seem to make a difference. In other words, people were experiencing fullness based as much on their visual estimation of how much food they were eating as on their actual physical consumption. Eating without seeing means we unwittingly eat more and feel less hungry. This chimes with a 2005 study, in which a research team created soup bowls that secretly refilled for some of the diners, to the point where they ate three quarters more soup than others.
Despite this, those diners with the ‘bottomless soup bowls’ did not believe they had eaten more, nor did they feel any more full than those eating from regular bowls.

The researchers from the Berlin study note that these findings show the importance of context for healthy eating, and make an interesting point about how something as common as eating in front of the TV may affect how much we eat, simply by affecting how much we focus on our food.
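The design described above is a 2×2 factorial: portion size crossed with visibility. As a toy illustration of the reported pattern, the simulation below assumes, purely hypothetically, that a visual estimate of portion size drives fullness ratings only when diners can see their food; the effect size, rating scale and sample size are all invented for the sketch and are not data from the study:

```python
import random

random.seed(0)

def simulate_fullness(lit, large_portion, n=100):
    """Simulate fullness ratings on a 0-10 scale for one cell of the
    2x2 design. In the lit condition a (hypothetical) visual estimate
    of portion size adds to the rating; in the dark it does not."""
    visual_effect = 2.0 if (lit and large_portion) else 0.0
    return [min(10, 5 + visual_effect + random.gauss(0, 1)) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Print the mean fullness rating for each of the four cells.
for lit in (True, False):
    for large in (True, False):
        m = mean(simulate_fullness(lit, large))
        print(f"lit={lit} large_portion={large} mean fullness={m:.2f}")
```

Under these toy assumptions, portion size shifts the mean rating only in the lit cells, mirroring the interaction the researchers reported.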
Pope (emeritus) Benedict XVI is widely considered to be one of the most influential Catholic theologians of the past 100 years. One of his most famous books is entitled The Spirit of the Liturgy, which he published in the year 2000 as Joseph Cardinal Ratzinger, five years prior to his election to the papacy. Among the topics that Ratzinger examines in that book is the relationship of the liturgy to time and space.

As Catholics, we are familiar with liturgical worship, especially at Mass. But many other Christian traditions eschew intricate rituals and formal prayers that take place in ornately decorated buildings, in favor of simpler gatherings that emphasize sermons and singing, as well as trying to imitate Christ in the world by loving people in our daily lives (something we’re supposed to do too). The reason for the stark difference in the form of worship, however, is that they view the sacrifice of Christ as a completed event that took place 2000 years ago, and the world as the “new Temple” where we worship the Lord and live our lives with Him. The Mass, rather than something sacred, could thus be interpreted as play-acting (at best) or even (at worst) a sacrilegious attempt to re-sacrifice Christ by returning to Old Testament Temple rituals. In his consideration of Catholic liturgy, Ratzinger tries to explain why Catholic liturgy is neither of these things.

The key, of course, is Christ Jesus. As Catholics, we believe that Jesus of Nazareth is God. As God, Christ transcends time and space, which are part of His creation. At the same time, we believe that Our Lord has a human nature, which means that He entered time and space and truly experienced them as a man. Christ is the bridge between heaven and earth, the One who brings together God and humanity in Himself. He makes it possible for us, who live in space and time, to enter into the eternal relationship of Father and Son that transcends space and time.
Ratzinger explains that, through liturgy, “time is drawn into what reaches beyond time.” When Jesus, at the Last Supper, instructs His disciples to “do this in memory of Me,” He establishes the means by which His one sacrifice is extended forward in time and space. The Mass is the way you and I, as members of the Mystical Body of Christ (the Church), are brought into Christ’s eternal offering of Himself to God the Father. Ratzinger explains that, at the Mass and other liturgical actions, “we do indeed participate in the heavenly liturgy, but this participation is mediated to us through earthly signs, which the Redeemer has shown to us as the place where His reality is to be found.” The liturgy is where “the Shepherd takes the lost sheep onto his shoulders and carries it home.”

The Mass, therefore, is not a historical re-enactment. That’s why we do not try to make everything look like it did at the Last Supper 2000 years ago. It is, rather, the way the Church enters into the event that transcends time and space. The Mass is the unveiling of the heavenly liturgy in the midst of the world. It “makes present” an event that took place 2000 years ago and which continues for all eternity in Heaven. That event is the Incarnation, the sacrifice, the resurrection, and eternal glorification of Christ Jesus. And it’s possible because He is both God and man, both transcending and present in time and space.
What Do Baby Chickens Eat From the Time They Hatch

Raising baby chickens is one of the most satisfying experiences you’ll ever have. However, do not make the mistake of believing that chickens of every age eat the same type of feed, because they certainly do not! From the time they hatch, baby chickens are all set for their very first meal. Sufficient nutrition in the first 6 to 8 weeks of their life is important to raising happy, healthy chickens. From starter feed to foraging for pests, timing is crucial when it comes to feeding these hungry, peeping poultry. Baby chickens require two things to become healthy adult chickens:

1. What do baby chickens eat: Fresh, Clean Water

Baby chickens need to have fresh, clean water available to them 24/7. This should be provided through a special chick waterer, designed to give them access to the water without enough space for them to fall in. It’s essential to keep inspecting the water and make sure that no manure or dirt has been kicked up into it and polluted it, so you may need to change the water frequently. It will be worth it!

2. What do baby chickens eat: Chicken Starter Feed

The main food source that baby chickens eat is chicken starter feed, a feed specifically formulated with the nutrients growing chickens require. The anatomy of chick starter begins with the essential nutrient: protein. Next to water, protein, both plant and animal, is the second most vital nutrient for young chicks. This star bodybuilder promotes the development of muscles, tissues, and organs; it’s essentially what makes your little ones grow. Do not hesitate to offer your young chicks some small worms, as they will love them too! Fats, carbohydrates, vitamins, and minerals make up the remainder of the cast of nutrients required by your growing baby chickens.
Once again, keep inspecting the starter feed to make sure that no manure or dirt has contaminated it; fresh is best! The simplest way to ensure that chicks of all kinds get all the nutrients they need is to feed them a commercial starter mash consisting of a mix of grains, protein, vitamins, and minerals. Starter mash is high in protein and lower in calories than feed rations formulated for older poultry. Never feed layer ration to baby chicks: the higher calcium content can seriously harm young kidneys. Continue feeding chick starter mash until your baby chickens are nearly old enough to lay eggs, at which time they will need a layer feed ration. The main feed changes that occur as chicks grow are the significantly larger quantities they eat and the type of feeder used.

Chick Feeders for Baby Chickens

Practically immediately after baby chickens are placed in the brooder, they look for things to peck. To help them find something to peck, sprinkle a little starter on a paper plate or a paper towel. Once they eat up all of that starter, they’ll look around for more and will discover the chick feeder. Chick feeders are designed to keep feed hygienic by preventing baby chickens from walking, sleeping, scratching, and pooping in it. For the first couple of weeks, a round feeder base that screws onto a narrow-mouth pint or quart mason jar works well and takes up little brooder space. Another economical alternative is a one-quart combination plastic feed holder and base. Chicks of all types quickly outgrow their first feeders: they will start to eat more, emptying the feeder too quickly; their heads will grow too big to fit the chick feeder openings; and they will begin roosting on top of the feeder, making a mess below. At this point, they will need a feeder with a bigger capacity and a bigger base. An adjustable-height hanging feeder is ideal for this purpose.
As chicks grow bigger, the feeder’s height may be easily adjusted to match the height of the birds’ backs, resulting in very little wasted feed. This style of feeder also has a roosting guard to keep baby chickens from perching on top. Whenever you change to a different feeder, leave the old one in place for a couple of days until you’re sure all the chicks are eating from the new one. Keep your chicks well supplied with a nutritious chicken starter mash ration, and they will reward you by being healthy and growing well.

How Much Do Baby Chickens Eat

Recommended feeding amounts for newly hatched birds:
- Layer baby chickens: 9-10 lbs per chick in the first ten weeks, or approximately 1 lb per week per bird
- Broiler baby chickens (based on Cornish game birds): 8-9 lbs per chick in the first six weeks, or approximately 1.2 lbs per week per bird

Owners of new baby chicks frequently ask how much the chicks will eat. Watch with Nutrena poultry specialist Twain Lockhart as he discusses how much baby chicks eat in their first weeks of life, along with tips and ideas on how to get them eating quickly.

How Often Do We Need to Feed Baby Chickens

It is best to let baby chickens feed freely 24/7. The crops of baby chickens can only hold a small quantity of food at one time, eliminating the possibility of overeating. Baby chickens on a limited or restricted feeding schedule may end up skipping meals if their crops have not yet emptied, causing them to miss out on important nutrition.

What Do Baby Chickens Eat in the Wild?

In the wild, baby chickens eat a wide variety of bugs, greens, and even little worms. As they grow and become stronger, they become more able to hunt other delicacies like frogs and even small mice. Yes, it’s true: chickens are omnivores.
Baby chickens are no exception; however, they take it slow when they are small and stick to bugs and greens, unless their mother helps them with a special meaty treat.
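For a rough planning aid, the per-bird figures quoted above (about 1 lb of starter per week for layer chicks, 1.2 lbs for broiler chicks) can be turned into a flock-level feed estimate. A minimal sketch; the function name and the example flock sizes are illustrative, not from any feed manufacturer:

```python
# Approximate weekly starter-feed rates quoted above (lb per chick per week).
FEED_RATE_LB_PER_WEEK = {
    "layer": 1.0,    # ~9-10 lbs per chick over the first ten weeks
    "broiler": 1.2,  # ~8-9 lbs per chick over the first six weeks
}

def starter_feed_needed(bird_type: str, n_chicks: int, weeks: int) -> float:
    """Estimate total starter feed (lb) for a flock over a rearing period."""
    return FEED_RATE_LB_PER_WEEK[bird_type] * n_chicks * weeks

# Example: 25 layer chicks over the ten-week starter period.
print(starter_feed_needed("layer", 25, 10))  # 250.0 lb
```

Buying a little extra on top of such an estimate covers spillage and the bigger appetites of fast-growing chicks.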
Cenozoic ectothermic continental tetrapods (amphibians and reptiles) have not been documented previously from Antarctica, in contrast to all other continents. Here we report a fossil ilium and an ornamented skull bone that can be attributed to the Recent South American anuran family Calyptocephalellidae, or helmeted frogs, representing the first modern amphibian found in Antarctica. The two bone fragments were recovered from Eocene sediments, approximately 40 million years old, on Seymour Island, Antarctic Peninsula. The record of hyperossified calyptocephalellid frogs outside South America supports Gondwanan cosmopolitanism of the anuran clade Australobatrachia. Our results demonstrate that Eocene freshwater ecosystems in Antarctica provided habitats favourable for ectothermic vertebrates (with mean annual precipitation ≥900 mm, coldest month mean temperature ≥3.75 °C, and warmest month mean temperature ≥13.79 °C), at a time when there were at least ephemeral ice sheets existing on the highlands within the interior of the continent.

Consistent with the geological evidence, it has been hypothesized that the formation of Antarctic ice sheets predates the final break-up of Gondwana, the opening of the Drake Passage and the thermal isolation of the continent1,2,3. This is reflected by a low diversity of terrestrial mammals on the Antarctic Peninsula during the middle to late Eocene, with only two species of large mammals and ten species of small mammals4,5,6, which sharply contrasts with the highly diverse marine fish fauna indicating temperate conditions in the Weddell Sea7,8. However, no Cenozoic ectothermic continental vertebrates (freshwater fishes, amphibians and reptiles) have been known from Antarctica so far. Here we report the discovery of a fossil ilium from Seymour Island, Antarctic Peninsula (Fig. 1a,b) which can be assigned to the lissamphibian order Anura, and a fragment of a sculptured skull bone that most probably derived from a hyperossified anuran.
We assign the specimens to the South American genus Calyptocephalella. Calyptocephalellids, or helmeted frogs, are widely known from Patagonia since the Late Cretaceous9. They became extinct in Argentine Patagonia during the Miocene, probably in relation to a decrease in humidity caused by the rise of the Andes, since the family survived to the present day in a temperate but humid refuge in the central Chilean Andes10. The material described here derives from estuarine to marginal-marine deposits of the Eocene La Meseta Formation, which were deposited in the James Ross Basin, a back-arc basin east of the Antarctic Peninsula, and which are widely exposed on Seymour Island11,12 (Fig. 1b,c). The fossil locality IAA 2/95, also known as ‘Marsupial Site’, is a lens of poorly consolidated, shelly conglomerate a few m2 in extent8,13. It is situated in the central portion of the Cucullaea I Allomember, within unit Telm 5 on the northwestern slope of the mesa (Fig. 1c,d), and informally referred to as the ‘Natica horizon’8,13. It has produced shark, ray and skate teeth, remains of marine bony fishes, as well as teeth of terrestrial mammals, worm (clitellate) cocoons, and seeds of water lilies8,12,13,14,15,16,17,18,19,20,21,22. Based on dinocyst occurrences, the age of this deposit is considered to be about 40 Ma (Bartonian, Eocene)23,24. The fossil frog remains were collected during three joint Argentinian-Swedish expeditions to Seymour Island in the austral summers 2011–13. The bone fragments were concentrated from dry-sieved sediment samples as described by8,12,20 and sorted by using a Leica MZ6 stereomicroscope. The material is housed in the palaeozoological collections of the Swedish Museum of Natural History, Stockholm, with the inventory numbers NRM-PZ B281 and B282.
Anura Fischer von Waldheim, 181325
Neobatrachia Reig, 195826
Australobatrachia Frost et al., 200627
Calyptocephalellidae Reig, 196028
Calyptocephalella Strand, 192829

Locality, horizon and age

IAA 2/95, Marsupial site, Seymour Island, Antarctic Peninsula (64°13′58″S; 56°39′06″W). ‘Natica horizon’ within the Cucullaea I Allomember (Telm 5) of the La Meseta Formation, Bartonian (40 Ma), Eocene23,24 (Fig. 1).

The preserved part of the ilium measures 3.9 mm in length, the distance from the tip of the dorsal acetabular expansion to the (preserved) tip of the ventral acetabular expansion measures 3.3 mm, and the maximum height of the acetabular fossa equals 2.5 mm. The skull bone measures 2.7 mm at both its broadest and longest parts.

The fragmentary right ilium (NRM-PZ B282) lacks the caudal portion of the acetabulum and most of the iliac shaft. The dorsal acetabular expansion has a smooth lateral surface and is higher than the preserved part of the ventral acetabular expansion (Fig. 2a). A large and deep supraacetabular fossa is present at its base (Fig. 2a,d). The preserved portion of the acetabulum is concave and its shape indicates a (semi-)circular outline. The acetabular rim is most prominent at its anterior part (Fig. 2a,c). The barely-developed ventral acetabular expansion projects ventrally. The posterior-most portion of the ventral acetabular expansion is broken off. However, the anterior portion of the ventral acetabular expansion is higher than the preserved posterior portion. In ventral view (Fig. 2a,c), the lateral surface of the ventral acetabular expansion is convex. The ventral acetabular expansion possesses a shallow and broad depression. In the preacetabular zone, a small and shallow preacetabular fossa is present (Fig. 2c). The preserved portion of the iliac shaft is damaged and precludes a confident statement as to whether the dorsal protuberance is present or absent.
A narrow and shallow longitudinal groove is observable in the lateral surface of the iliac shaft, which probably corresponds to the posterior extension of the ventral depression (sensu10) (Fig. 2a). However, intact parts of bone surface are preserved slightly ventral to the dorsal margin on both lateral and medial surfaces (Fig. 2e,f). The one on the lateral surface is a curved shallow groove and runs posteroventrally (Fig. 2e). This feature anteroventrally demarcates the slightly elevated roughened scar interpreted above as the dorsal protuberance. The area between the dorsal acetabular expansion and iliac shaft is slightly projected dorsally. This area corresponds to the position of the dorsal protuberance. In fact, no clear evidence of a dorsal protuberance can be found on the ilium; only a slightly roughened area with minimal elevation, corresponding to the dorsal protuberance and the scar for the insertion of the musculus gluteus magnus, can be observed. At the caudal side of the dorsal protuberance, a distinct notch is visible (Fig. 2a), which we consider as further evidence of our interpretation. The area corresponding to the dorsal protuberance is located anteriorly to the anterior margin of the acetabular rim. Medially, the entire surface opposing the acetabulum is lost and the area preserved more anteriorly is slightly convex medially and smooth. Anteriorly and dorsally, just adjacent to the anterior end of the dorsal protuberance, a foramen is present (Fig. 2b). The fragmentary right ilium can be referred to an anuran based on the following characters30 (the numbers before the characters correspond to the feature numbers of Appendix 1 in Gardner et al.30): 7. (semi-)circular acetabulum; 9. acetabulum with distinct margins; 10. acetabular surface concave; 13. at least dorsal acetabular expansion is strongly divergent; 18. the dorsal protuberance present.
Thus, the ilium derives from a small-sized frog (3.8 ± 0.4 cm snout-vent length, see methods, Table 1). The specimen is partly eroded and rather poorly preserved; however, it can be compared with all South American and Australian frog families (Figs. 4, S1 and S2, Table S1). The families Ranidae, Bufonidae and Hylidae have not been illustrated in the present work, since their morphology is well known31 (Table S1). The comparison has been done at family level, since the ilia display diagnostic features characteristic of the family (dimensions of the dorsal and ventral acetabular expansions; location of the dorsal protuberance relative to the anterior margin of the acetabular rim etc.32,33). The studied ilium (NRM-PZ B282) differs in: (1) Reduced anterior portion of the dorsal acetabular expansion from nearly all South American and Australian frog families and the genus Telmatobufo, which have moderately or strongly developed anterior portion of the dorsal acetabular expansion. Only the genus Calyptocephalella (Fig. 4c,e), the families Ranidae31, Pipidae (Fig. S1a), Rhinodermatidae (Fig. S1j), and Leptodactylidae (Fig. S2b) have a similar state of this character. (2) Dorsal protuberance located either at the level of or anteriorly from the anterior margin of the acetabular rim from nearly all families, besides Brachycephalidae (Fig. S1a), Rhinodermatidae (Fig. S1j), Telmatobiidae (Fig. S1k), Hylodidae (Fig. S1m), Leptodactylidae (Fig. S2b) and the genera Calyptocephalella (Fig. 4b–e) and Telmatobufo (Fig. 4g,h). (3) Developed dorsal acetabular expansion from the families Ranidae31, Hylidae31, Bufonidae31, Myobatrachidae (Fig. 4i), Pipidae (Fig. S1i), Microhylidae (Fig. S1b), Telmatobiidae (Fig. S1k), Leptodactylidae (Fig. S2b), Allophrynidae (Fig. S2c), Centrolenidae (Fig. S2d) and the genus Telmatobufo (Fig. 4g,h).
Other families have a moderately or well-developed dorsal acetabular expansion; however, due to incomplete preservation of the Antarctic frog remains, any further comparison is impossible. (4) Weakly developed dorsal protuberance and lack of dorsal tubercle from nearly all families (e.g. Limnodynastidae, Fig. 4j), besides Calyptocephalellidae (Fig. 4b–e,g,h), Myobatrachidae (Fig. 4i), Craugastoridae (Fig. S1e), and Dendrobatidae (Fig. S2e). Among the compared forms, only the South American endemic genus Calyptocephalella matches all four mentioned characters. In addition to this, a shallow and broad depression on the anterior portion of the ventral acetabular expansion is a unique character observable on our ilium (NRM-PZ B282) and Recent Calyptocephalella gayi (Fig. 4). Further, the fossil ilium displays a ventral depression on its lateral surface anteriorly to the acetabulum (Fig. 2c); a comparable structure can also be observed in the fossil species Calyptocephalella canqueli9 but not in the Recent species C. gayi (Fig. 4). The second bone fragment (NRM-PZ B281) is flat and slightly curved. Both sides of the bone have different structures. One surface is covered by small to large, rather deep pits, circular or reniform in outline, which sink into the planar surface of the bone (Fig. 3a). The diameters of the pits vary from 0.1–0.7 mm and some of them are punctured by foramina. The opposite surface of the bone is in general smooth, slightly deepened, and is pierced with some foramina, some of which are preceded by a groove (Fig. 3b). One side of the fragment preserves an unbroken margin of the original bone with a distinct process that is bent and that gives the bone a curved shape (Fig. 3c). The ornamented surface of the bone projects slightly over this process. Comparable ornamentation, built of pits of different sizes, is found on the dorsal surfaces of different cranial and postcranial bones of amphibians and reptiles34,35.
Among them, the following groups can be excluded from consideration: (1) Albanerpetontidae (Allocaudata); albanerpetontids are a primarily Laurasian lissamphibian group with a single occurrence in Northern Africa. So far no evidence of a Gondwanan radiation of albanerpetontids exists36. In addition to this, all their ornamented bones (e.g. frontal, premaxillae)37 do not resemble the bone described here. (2) Caudata; salamanders are also considered as a Laurasian group, with a number of occurrences in Africa which need critical revision38. In salamanders, ornamented bones are found both among skull bones and on vertebrae (on plates located on the tip of the neural arch)39. Bone ornamentation here (e.g. Tylototriton, Chelotriton, Echinotriton39,40) is represented by a network of pits, ridges and pointy spines that do not resemble the bone described here. (3) Crocodylia; in crocodyliforms, comparable patterns of ornamentation with well-developed pits appear only with growth during later ontogenetic stages41,42,43. On one hand, the bone dimensions indicate a small-sized animal (corresponding to a juvenile crocodilian without such developed ornamentation). On the other hand, crocodylian osteoderms are flat without any processes, unlike the studied bone. (4) Testudines; shell plates of several turtles, such as Trionyx, Allaeochelys etc.35, are also covered by ornamentation. The ornamentation is characterized by larger and closely arranged pits, which are not always clearly delimited from each other (see Scheyer35: Fig. 1a). (5) Lacertilia; lizards also have skull bones and osteoderms covered with ornamentation patterns44. They all are characterized by a network of spines, grooves, ridges45 and protuberances46, which differs from the morphology on NRM-PZ B281. The ornamentation pattern found in NRM-PZ B281 is comparable to that of some frog genera, i.e.
Thaumastosaurus47, Beelzebufo48, Calyptocephalella9 and Baurubatrachus49, but only the last three genera are Gondwanan forms and, thus, considered for comparison herein. Beelzebufo is a very large form and the ornamentation pattern is present both on skull bones and on vertebrae50. Calyptocephalella9 and Baurubatrachus49 have very similar ornamentation patterns on the surfaces of hyperossified skull bones, comparable to our specimen. A recent phylogenetic analysis49 placed the Late Cretaceous Baurubatrachus within both Recent calyptocephalellid genera Calyptocephalella and Telmatobufo. Though Muzzopappa and Báez10 mention that both Calyptocephalella and Telmatobufo are characterized by a heavily ossified neurocranium, we can confirm this only for the former genus (Fig. 4a,f). Within Calyptocephalella the ornamentation pattern on skull bones is variable. In C. canqueli10, it is built either by a network of pits in small individuals, or by tuberculated ornamentation in adults. In C. satan9 and C. casamayorensis51, ornamented skull bones are slightly larger than NRM-PZ B281 but they have a similar pattern built of pits. C. pichileufensis48 is known from larger individuals which show similar ornamentation patterns but with larger pits. In comparison to these species, the Antarctic frog displays an ornamentation most similar to that of C. satan9 and C. casamayorensis48. Taking into account our comparison, we conclude that the ornamented bone fragment NRM-PZ B281 represents a skull bone (most probably a nasal) of a small-sized Calyptocephalella or Baurubatrachus. Given the presence of a small Calyptocephalella, as indicated by the ilium from the same outcrop, which measures only a few m2, it is most likely that specimen NRM-PZ B281 belongs to the same genus. A comparable record of an ilium and ornamented bones referable to the genus Calyptocephalella has been mentioned in Báez52.
Among Recent amphibians, the frogs (Anura) have the widest distribution, covering all continents except Antarctica, where the conditions have been uninhabitable for tens of millions of years. Contrary to all other continents, no traces of any extant amphibian group, all of which belong to the lissamphibian clade, have been documented from Antarctica. This paper presents the first record of a lissamphibian in Antarctica, with Eocene fossils referable to the order Anura, and most likely to the australobatrachian genus Calyptocephalella. The family Calyptocephalellidae belongs to neobatrachian frogs and is exclusively known from South America53,54. The five extant species, including the monospecific genus Calyptocephalella with hyperossified skull bones, are restricted to the Chilean Andes54, while most fossil representatives are known from Argentine Patagonia9,48,53. Today, Calyptocephalella inhabits lowland areas of central Chile (upper elevation limit 500 m) west of the Andes within temperate and humid climates, between latitudes 30–43°S. It has an aquatic or semiaquatic lifestyle and populates standing or slow-flowing water bodies (lakes, ponds, streams) in the Valdivian temperate Nothofagus forests54,55,56. The oldest fossils referable to Calyptocephalella are known from the Upper Cretaceous of Argentina9,52. During the Paleocene–terminal early Miocene, their geographic range was restricted to Patagonia east of the Andes48,51,53,57. Not until the late Pleistocene did they appear west of the Andes, where they have their endemic present-day distribution54,55,56,57. The clade Australobatrachia comprises Myobatrachoidea (families Myobatrachidae + Limnodynastidae sensu27), nowadays distributed in Australia and south of New Guinea, and the family Calyptocephalellidae (Batrachophrynidae sensu27). Australobatrachia are considered a stem group of the Hyloidea.
The earliest myobatrachoid from Australia is at least as old as early Eocene, based on fragmentary ilia that were referred to the basal extant Lechriodus58. The split between Calyptocephalellidae and Myobatrachidae (Calyptocephalellidae + Myobatrachoidea sensu27) occurred ~100 Ma (~Early-Late Cretaceous boundary)59. Considering the distributions of extant Australobatrachia (Fig. 5), the earliest fossil records10 and the divergence age (from genetic data)59 of both the Calyptocephalellidae and Myobatrachoidea lineages, it is clear that Antarctica played an important palaeobiogeographic role for Australobatrachia and their consequent dispersal. Because the most recent common ancestor of the clade including Hyloidea and Myobatrachidae + Calyptocephalellidae occurred in South America, their origin in South America and consequent dispersal from South America to Australia via Antarctica has been suggested59. Additionally, this suggests one more case of strong faunistic affinities of the continent with South America and Australia4,6,16,60. So far, Antarctica has been considered as a dispersal route, but not as a probable place of origin. The new fossil finds support the hypothesis10 that Antarctica may have acted as a center of diversification for australobatrachians. The Seymour Island frog reported herein is the first vertebrate indicative of freshwater habitats on the Eocene Antarctic Peninsula, following invertebrate and plant evidence12,17 (Fig. 6). It is interesting to note that nearly all fossil localities where Calyptocephalella occurs (excepting those for which fossil plant data are not available) contain evidence of the presence of Nothofagus, including Seymour Island16,60,61. The southern extant range of Calyptocephalella occurs sympatrically with the microbiotherian marsupial Dromiciops gliroides (Fig.
S4), also known as “Monito del Monte” or “Colocolo Opossum”, a small mammal with an arboreal lifestyle and an endemic distribution in the dense Valdivian Nothofagus forests of highland Argentina and Chile62. The climate of this Nothofagus forest area with the sympatric occurrences of these two endemic animals shows humid and temperate conditions (for the numerical values, see Methods and Table 2, Fig. S4). Dromiciops gliroides is the only extant species of the order Microbiotheria and is considered the only South American representative of the superorder Australidelphia, which otherwise comprises Australian marsupials63. From the same small shell-rich lens that produced the frog remains reported herein, the fossil microbiotherian Woodburnodon casei has been described64. Hence, we hypothesize that the climatic conditions for the Antarctic Peninsula during the Bartonian (late middle Eocene) should be comparable with the climate found today in the concurrent range of the Calyptocephalella- and Dromiciops-inhabited Nothofagus forests of South America. The fossil finds of a frog and marsupial from Seymour Island, and their fossil and Recent distributions, represent outstanding examples of the role of global climate change in shifting biogeographic ranges. Despite global cooling and the disappearance of the habitats of these groups over large areas from Antarctica to Patagonia, they maintained their relictual occurrence in the Nothofagus forests of the central Chilean Andes. Thus, the Valdivian Nothofagus forest is a unique environment that not only offers habitats for Eocene Antarctic refugees but also provides a modern analogue of the Antarctic climate just prior to the glaciation of the southern continent.

The studied fossil material has been mounted on aluminum stubs, coated with gold and imaged using a Hitachi S-4300 field emission scanning-electron microscope at the Swedish Museum of Natural History (Stockholm).
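The body-size figure given for the fossil (a snout-vent length of about 3.8 ± 0.4 cm) comes from proportional scaling, described in the Methods: the height of the iliac-shaft transition (HT) measured on the fossil is scaled by HT/SVL ratios taken from reference specimens. A minimal sketch of that kind of calculation; the fossil HT and the reference (HT, SVL) pairs below are hypothetical placeholders, not measurements from the paper:

```python
def estimate_svl(ht_fossil_mm: float, ref_specimens: list[tuple[float, float]]) -> float:
    """Estimate snout-vent length (SVL) of a fossil frog by proportional
    scaling: each reference specimen contributes ht_fossil * (SVL_ref / HT_ref),
    and the per-specimen estimates are averaged. All inputs in mm."""
    estimates = [ht_fossil_mm * (svl / ht) for ht, svl in ref_specimens]
    return sum(estimates) / len(estimates)

# Hypothetical (HT, SVL) pairs for two reference specimens.
references = [(5.0, 60.0), (6.0, 68.0)]
svl = estimate_svl(2.9, references)  # hypothetical fossil HT in mm
print(f"estimated SVL: {svl:.1f} mm")  # → estimated SVL: 33.8 mm
```

Averaging over several reference specimens, as here, dampens the effect of individual variation in the HT/SVL ratio on the final estimate.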
For the comparison with relevant families (Table S1), the CT data from the website Morphosource65 have been used. The visualization and segmentation of the bone material have been performed using the Amira 9.0 software in Porrentruy, Switzerland. If not otherwise indicated, the osteological nomenclature of this study follows that of Gómez and Turazzini66 for the description of the fossil remains.

Body size estimation

The values of the snout-vent length (SVL) of Calyptocephalella sp. from Antarctica have been reconstructed using photographs of the skeleton of C. pichileufensis48 and the 3D model of C. gayi (Table 1, Fig. S3). The height of the transition (HT) from the iliac shaft to the ilial body (Fig. S3) has been used as a reference for comparison to reconstruct the body size of NRM-PZ B282. The ratio of the HT to SVL has been used as a reference to calculate the value of the snout-vent length of the individual NRM-PZ B282.

We analyzed the climatic parameters of selected stations from the area with sympatric occurrence of Dromiciops gliroides and Calyptocephalella gayi (Table 2 and Fig. S4). Since the upper elevation limit for extant Calyptocephalella distribution is 500 m above sea level, only stations up to this elevation have been considered for climate analysis. This analysis shows a remarkable climatic space with mean annual precipitation ≥900 mm, coldest month mean temperature ≥3.75 °C, and warmest month mean temperature ≥13.79 °C (Table 2). The climatic parameter variations result from both elevation and latitudinal differences, so the temperature increases and the precipitation decreases northwards, whereas an opposite trend is observable at higher altitudes.

Pekar, S. F., Hucks, A., Fuller, M. & Li, S. Glacioeustatic changes in the early and middle Eocene (51–42 Ma): Shallow-water stratigraphy from ODP Leg 189 Site 1171 (South Tasman Rise) and deep-sea δ18O records. Geol. Soc. Am. Bull. 117, 1081–1093 (2005).
Miller, K. G., Wright, J. D. & Browning, J. V.
Visions of ice sheets in a greenhouse world. Mar. Geol. 217, 215–231 (2005). Zachos, J., Pagani, M., Sloan, L., Thomas, E. & Billups, K. Trends, rhythms, and aberrations in global climate 65 Ma to present. Science 292, 686–693 (2001). Woodburne, M. O. & Zinsmeister, W. J. Fossil land mammal from Antarctica. Science 218, 284–286 (1982). Woodburne, M. O. & Case, J. A. Dispersal, vicariance, and the Late Cretaceous to Early Tertiary land mammal biogeography from South America to Australia. J. Mamm. Evol. 3, 121–161 (1996). Gelfo, J. N., Mörs, T., Lorente, M., López, G. M. & Reguero, M. The oldest mammals from Antarctica, early Eocene of the La Meseta Formation, Seymour Island. Palaeontology 58, 101–110 (2015). Kriwet, J., Engelbrecht, A., Mörs, T., Reguero, M. & Pfaff, C. Ultimate Eocene (Priabonian) Chondrichthyans (Holocephali, Elasmobranchii) of Antarctica. J. Vertebr. Paleontol. 36, e1160911 (2016). Schwarzhans, W., Mörs, T., Engelbrecht, A., Reguero, M. & Kriwet, J. Before the freeze: Otoliths from the Eocene of Seymour Island, Antarctica, reveal dominance of gadiform fishes (Teleostei). J. Syst. Palaeontol. 15, 147–170 (2017). Agnolin, F. A new Calyptocephalellidae (Anura, Neobatrachia) from the Upper Cretaceous of Patagonia, Argentina, with comments on its systematic position. Stud. Geol. Salamanticensia 48, 129–178 (2012). Muzzopappa, P. & Báez, A. M. Systematic status of the mid-Tertiary neobatrachian frog Calyptocephalella canqueli from Patagonia (Argentina), with comments on the evolution of the genus. Ameghiniana 46, 113–125 (2009). Reguero, M., Goin, F., Acosta Hospitaleche, C., Dutra, T. & Marenssi, S. Late Cretaceous/Paleogene West Antarctica Terrestrial Biota and its Intercontinental Affinities 55–110 (Springer, 2013). Friis, E. M., Iglesias, A., Reguero, M. A. & Mörs, T. Notonuphar antarctica, an extinct water lily (Nymphaeales) from the Eocene of Antarctica. Plant Syst. Evol. 181, 969–980 (2017). McLoughlin, S., Bomfleur, B., Mörs, T. & Reguero, M. 
Fossil clitellate annelid cocoons and their microbiological inclusions from the Eocene of Seymour Island, Antarctica. Palaeontol. Electron. 19, 1–27 (2016). Goin, F. J., Case, J. A., Woodburne, M. O., Vizcaíno, S. F. & Reguero, M. A. New Discoveries of “Opposum-Like” Marsupials from Antarctica (Seymour Island, Medial Eocene). J. Mamm. Evol. 6, 335–365 (1999). Bond, M., Reguero, M.A., Vizcaíno, S.F. & Marenssi, S. Cretaceous-Tertiary high-latitude palaeoenvironments: James Ross Basin, Antarctica, (ed. Francis, J. E., Pirrie, D. & Crame, J. A.) 163–176 (Geological Society, 2006). Chornogubsky, L., Goin, F. J. & Reguero, M. A reassessment of Antarctic polydolopid marsupials (Middle Eocene, La Meseta Formation). Antarct. Sci. 21, 285–297 (2009). Bomfleur, B., Mörs, T., Ferraguti, M., Reguero, M. A. & McLoughlin, S. Fossilized spermatozoa preserved in a 50-Myr-old annelid cocoon from Antarctica. Biol. Letters 11, 20150431, https://doi.org/10.1098/rsbl.2015.0431 (2015). Engelbrecht, A., Mörs, T., Reguero, M. A. & Kriwet, J. Eocene squalomorph sharks (Chondrichthyes, Elasmobranchii) from Antarctica. J. S. Am. Earth Sci. 78, 175–189 (2017). Engelbrecht, A., Mörs, T., Reguero, M. A. & Kriwet, J. New carcharhiniform sharks (Chondrichthyes, Elasmobranchii) from the early to middle Eocene of Seymour Island, Antarctic Peninsula. J. Vertebr. Paleontol. 10, e1371724 (2017). Engelbrecht, A., Mörs, T., Reguero, M. A. & Kriwet, J. Revision of Eocene Antarctic carpet sharks (Elasmobranchii, Orectolobiformes) from Seymour Island, Antarctic Peninsula. J. Syst. Palaeontol. 15, 969–990 (2017). Engelbrecht, A., Mörs, T., Reguero, M. A. & Kriwet, J. Skates and rays (Elasmobranchii, Batomorphii) from the Eocene La Meseta and Submeseta formations, Seymour Island, Antarctica. Hist. Biol. 10, 1–17 (2018). Marramà, G., Engelbrecht, A., Mörs, T., Reguero, M. A. & Kriwet, J. 
The southernmost occurrence of Brachycarcharias (Lamniformes, Odontaspididae) from the Eocene of Antarctica provides new information about the paleobiogeography and paleobiology of Paleogene sand tiger sharks. Riv. Ital. Paleontol. S. 124, 283–298 (2018). Douglas, P. M. J. et al. Pronounced zonal heterogeneity in Eocene southern high-latitude sea surface temperatures. P. Natl Acad. Sci. USA 111, 6582–6587 (2014). Amenábar, C. R., Montes, M., Nozal, F. & Santillana, S. Dinoflagellate cysts of the La Meseta Formation (middle to late Eocene), Antarctic Peninsula: implications for biostratigraphy, palaeoceanography and palaeoenvironment. Geological Magazine, 1–16, https://doi.org/10.1017/S0016756819000591 (2019). Fischer, G. Zoognosia. Tabulis Synopticis Illustrata, in Usum Prælectionum Academiæ Imperialis Medico-Chirurgicæ Mosquensis Edita 1-465 (Typis Nicolai Sergeidis Vsevolozsky, 1813). Reig, O. A. Proposiciones para una nueva macrosistemática de los anuros (nota preliminar). Physis 21, 231–297 (1958). Frost, D. R. et al. The amphibian tree of life. B. Am. Mus. Nat. Hist. 297, 1–370 (2006). Reig, O. A. Las relaciones genéricas del anuro chileno Calyptocephalella gayi (Dum. and Bibr.). Actas y Trabajos I. Congreso Sudamericano Zoología, La Plata 1, 271–278 (1960). Strand, E. Miscellanea nomenclatorica zoologica et paleontologica I-II. Arch. Naturgesch. 92, 30–75 (1928). Gardner, J. D. et al. Comparative morphology of the ilium of anurans and urodeles (Lissamphibia) and a re-assessment of the anuran affinities of Nezpercius dodsoni Blob et al., 2001. J. Vertebr. Paleontol. 30, 1684–1696 (2010). Bailon, S. Différenciation ostéologique des Anoures (Amphibia, Anura) de France 1–41 (Centre de Recherches Archéologiques du CNRS, 1999). Rage, J.-C. Frogs (Amphibia, Anura) from the Eocene and Oligocene of the phosphorites du Quercy (France). An overview. Foss. Imprint 72, 53–66 (2016). Folie, A. et al. Early Eocene frogs from Vastan Lignite Mine, Gujarat, India. 
Acta Palaeontol. Pol. 58, 511–524 (2013). Clarac, F., Buffrénil, V., de, Brochu, C. & Cubo, J. The evolution of bone ornamentation in Pseudosuchia: morphological constraints versus ecological adaptation. Biol. J. Linn. Soc. 121, 395–408 (2017). Scheyer, T. M., Sander, P. M., Joyce, W. G., Böhme, W. & Witzel, U. A plywood structure in the shell of fossil and living soft-shelled turtles (Trionychidae) and its evolutionary implications. Org. Divers. Evol. 7, 136–144 (2007). Gardner, J. D., Evans, S. E. & Sigogneau-Russell, D. New albanerpetontid amphibians from the Early Cretaceous of Morocco and Middle Jurassic of England. Acta Palaeontol. Pol. 48, 301–319 (2003). Gardner, J. D. Albanerpetontid amphibians from the Upper Cretaceous (Campanian and Maastrichtian) of North America. Geodiversitas 22, 349–388 (2000). Gardner, J. D. & Rage, J.-C. The fossil record of lissamphibians from Africa, Madagascar, and the Arabian Plate. Palaeobio. Palaeoenv. 96, 169–220 (2016). Estes, R. Gymnophiona, Caudata 1–115 (Gustav Fischer, 1981). Schoch, R., Poschmann, M. & Kupfer, A. The salamandrid Chelotriton paradoxus from Enspel and Randeck Maars (Oligocene–Miocene, Germany). Palaeobio. Palaeoenv. 95, 77–86 (2015). Vickaryous, M. K. & Hall, B. K. Development of the dermal skeleton in Alligator mississippiensis (Archosauria, Crocodylia) with comments on the homology of osteoderms. J. Morphol. 269, 398–422 (2008). Alibardi, L. & Thompson, M. B. Scale morphogenesis and ultrastructure of dermis during embryonic development in the alligator (Alligator mississippiensis, Crocodilia, Reptilia). Acta Zool. 81, 325–338 (2000). Buffrénil, Vde Morphogenesis of bone ornamentation in extant and extinct crocodilians. Zoomorphology 99, 155–166 (1982). Estes, R. Sauria terrestria, Amphisbaenia 1–249 (Gustav Fischer, 1983). Čerňanský, A. & Augé, M. L. 
New species of the genus Plesiolacerta (Squamata: Lacertidae) from the upper Oligocene (MP28) of Southern Germany and a revision of the type species Plesiolacerta lydekkeri. Palaeontology 56, 79–94 (2013). Cicimurri, D. J., Knight, J. L., Self-Trail, J. M. & Ebersole, S. M. Late Paleocene glyptosaur (Reptilia: Anguidae) osteoderms from South Carolina, USA. J. Paleontol. 90, 147–153 (2016). Vasilyan, D. Eocene Western European endemic genus Thaumastosaurus: New insights into the question “Are the Ranidae known prior to the Oligocene?”. PeerJ 6, https://doi.org/10.7717/peerj.5511 (2018). Gómez, R. O., Báez, A. M. & Muzzopappa, P. A new helmeted frog (Anura: Calyptocephalellidae) from an Eocene subtropical lake in northwestern Patagonia, Argentina. J. Vertebr. Paleontol. 31, 50–59 (2011). Báez, A. M. & Gómez, R. O. Dealing with homoplasy: osteology and phylogenetic relationships of the bizarre neobatrachian frog Baurubatrachus pricei from the Upper Cretaceous of Brazil. J. Syst. Palaeontol. 16, 279–308 (2018). Evans, S. E., Groenke, J. R., Jones, M. E. H., Turner, A. H. & Krause, D. W. New material of Beelzebufo, a hyperossified frog (Amphibia: Anura) from the Late Cretaceous of Madagascar. Plos One 9, e87236, https://doi.org/10.1371/journal.pone.0087236 (2014). Schaeffer, B. Anurans from the early Tertiary of Patagonia. B. Am. Mus. Nat. Hist. 93, 41–68 (1949). Báez, A.M. The Late Cretaceous Fauna of Los Alamitos, Patagonia, Argentina (ed. Bonaparte J. F.) 121–130 (Museo Argentino de Sciencias Naturales Bernadino Rivadavia, 1987). Otero, R. A., Jimenez-Huidobro, P., Soto-Acuña, S. & Yury-Yáñez, R. E. Evidence of a giant helmeted frog (Australobatrachia, Calyptocephalellidae) from Eocene levels of the Magallanes Basin, southernmost Chile. J. S. Am. Earth Sci. 55, 133–140 (2014). Vitt, L. J. & Caldwell, J. P. Herpetology. An introductory biology of amphibians and reptiles 1–776 (Elsevier Academic Press, Amsterdam, 2013). Veloso, A., Formas, R.J. & Gerson, H. 
Calyptocephalella gayi. The IUCN Red List of Threatened Species (2010). Cei, J. M. Batracios de Chile 1–128 (Universidad de Chile, Santiago, 1962). Nicoli, L., Muzzopappa, P. & Faivovich, J. The taxonomic placement of the Miocene Patagonian frog Wawelia gerholdi (Amphibia: Anura). Alcheringa 40, 153–160, https://doi.org/10.1080/03115518.2016.1101998 (2016). Tyler, M. J. & Godthelp, H. A new species of Lechriodus Boulenger (Anura: Leptodactylidae) from the Early Eocene of Queensland. T. Roy. Soc. South Aust. 117, 187–189 (1993). Feng, Y.-J. et al. Phylogenomics reveals rapid, simultaneous diversification of three major clades of Gondwanan frogs at the Cretaceous-Paleogene boundary. P. Natl Acad. Sci. USA 114, E5864–E5870 (2017). Vizcaíno, S.F., Kay, R.F. & Bargo, M.S. (eds.). Early Miocene paleobiology in Patagonia: High-latitude paleocommunities of the Santa Cruz Formation 1–378 (Cambridge Univ. Press, 2012). Francis, J.E. et al. Antarctica: A keystone in a changing world; proceedings of the 10 th International Symposium on Antarctic Earth Sciences, Santa Barbara, California (ed. Cooper, A. K. & Barrett, P.) 19–27 (National Academies Press, 2008). Nowak, R.M. & Dickman, C.R. Walker’s marsupials of the world 1–226 (Johns Hopkins University Press, 2005). Nilsson, M. A. et al. Tracking marsupial evolution using archaic genomic retroposon insertions. Plos biology 8, e1000436, https://doi.org/10.1371/journal.pbio.1000436 (2010). Goin, F. et al. New marsupial (Mammalia) from the Eocene of Antarctica, and the origins and affinities of the Microbiotheria. Rev. Asoc. Paleontol. Argentina 64, 597–603 (2007). Unknown. Morphosource. Available at, https://www.morphosource.org/ (2020). Gómez, R. O. & Turazzini, G. F. An overview of the ilium of anurans (Lissamphibia, Salientia), with a critical appraisal of the terminology and primary homology of main ilial features. J. Vertebr. Paleontol. 36, e1030023 (2016). The world bank groups. Climate Change Knowledge Portal (2017). 
Acknowledgements
We thank the Argentine Antarctic Institute (IAA-DNA), the Argentine Air Force and the Swedish Polar Research Secretariat (SPFS) for logistical support in Antarctica; M. de los Reyes for screening and picking; E. M. Friis, S. McLoughlin and P. von Knorring for assistance with the figures; C. Mays and the late J.-C. Rage for helpful comments on an earlier manuscript version; and handling editor C. Ohneiser, reviewer T. Worthy and an anonymous reviewer for their careful work, which improved the manuscript significantly. This work was funded by the Swedish Research Council (VR grant number 2009-4447) to T.M., the Bolin Centre for Climate Research, Stockholm University (RA6 grant) to T.M., the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET grant number PIP 0462) to M.R., and the Argentinian National Agency for Promotion of Science and Technology (ANPCyT grant number PICTO 0093/2010) to M.R.

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article: Mörs, T., Reguero, M. & Vasilyan, D. First fossil frog from Antarctica: implications for Eocene high latitude climate conditions and Gondwanan cosmopolitanism of Australobatrachia. Sci. Rep. 10, 5051 (2020). https://doi.org/10.1038/s41598-020-61973-5
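The ratio-based body-size estimation described under "Body size estimation" above reduces to simple scaling: the fossil's iliac-transition height (HT) is divided by the HT/SVL ratio measured on a complete reference skeleton. A minimal sketch of that arithmetic follows; the measurements used here are placeholders for illustration only, not the values from Table 1.

```python
def estimate_svl(ht_fossil: float, ht_ref: float, svl_ref: float) -> float:
    """Estimate snout-vent length (SVL) of a fossil individual by scaling
    its iliac-transition height (HT) with the HT/SVL ratio taken from a
    complete reference skeleton (e.g. a 3D model of C. gayi)."""
    return ht_fossil * (svl_ref / ht_ref)

# Placeholder measurements in mm -- hypothetical, not data from the paper:
svl_estimate = estimate_svl(ht_fossil=8.0, ht_ref=6.0, svl_ref=150.0)
print(f"estimated SVL: {svl_estimate:.1f} mm")  # estimated SVL: 200.0 mm
```

The same scaling applied with several reference individuals (as in the paper's use of both C. pichileufensis and C. gayi) would yield a range of SVL estimates rather than a single value.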
Weight training is truly beneficial when done right. It can help with building strength, toning muscle, and burning fat. On the other hand, if you do it wrong, you can injure yourself. Three main things to consider when lifting weights are your posture and form, the weight you lift, and your breathing.

Form: When you use the correct form, you will be less likely to injure yourself, and the weight lifting will be more effective. Just as it is important to lift boxes or other heavy items correctly, it's important to lift weights the right way. This includes picking them up and setting them back down. Also, stand up straight. Good posture and form are key and will help you avoid pain and injuries.

Weights: Next, don't start with super heavy weights. It's normal to build up the amount of weight you can lift over time. Start with something that isn't too light and isn't too heavy. You should be able to do sets of 15; if you cannot, the weights are too heavy. When they start to become too easy, increase the volume or the weight, but don't overdo it. You should be in control of the weights at all times. Your body will let you know if it's more than you can handle, so listen and pay attention to its signals. If you feel any pain, stop immediately. You may need rest to recover, or even to decrease the amount of weight you are lifting. Strength training isn't about how fast you can do it, or even how heavy you can go. Safety is the priority.

Breathing: It's common to want to hold your breath when weightlifting, but this is not a good idea. Weightlifting can be intense, as you are building strength and muscle, so breathing the right way is critical. It ensures that your body gets an adequate amount of oxygen as you work your muscles. Exhale when you lift the weight, and inhale when you lower it. Then give yourself a minute's break between sets before you start lifting again.

When strength training, make sure that you give yourself at least a day to recover. It's okay to lift weights each day, as long as you are working different muscle groups. For example, you might do arms on Mondays and Wednesdays, and legs on Tuesdays and Thursdays. To get good results, you should be working your muscles a minimum of two days a week, but also remember to rest them. Also, aim for full-body strength training. You may want to target certain areas, but working all muscle groups gives the best results. Finally, make sure you drink enough water; staying hydrated is important because your muscles are about 75% water. If you don't have any weights, you can do squats, pushups, sit-ups, lunges and planks to work on strength training. Again, rest between sets and stay hydrated.
A great article about seasonal allergies from www.parents.com:

"Spring Into Allergy Season," from Parents Magazine

"Up to 40 percent of children in the United States suffer from seasonal allergies. Find out what symptoms parents should look for to determine if their kid is suffering from allergies, and what treatments are available.

If welcoming the new season means welcoming more sneezing and sniffling around your house, then your kids might be suffering from allergies. As many as 40 to 50 million people in the United States are affected by allergies, and at least 35.9 million Americans have seasonal allergies, according to the American Academy of Allergy, Asthma & Immunology. So how can a parent know if their kid just has a cold, or if it's more than that? And what should they do if they suspect it is allergies? We asked Dr. Todd Mahr, Director of Pediatric Allergy/Asthma/Immunology at Gunderson Lutheran Hospital in La Crosse, Wisconsin, to give us some insight into symptoms, steps parents should take, and treatments for allergies.

What are the symptoms of seasonal allergies in kids?
They'll have repetitive sneezing and a runny nose with a thin, clear discharge ... it's not usually thick and gooey; nasal congestion; an itchy nose, ears, eyes and throat -- so they get the itchies -- and watery eyes. For perennial allergies, they'll get more nasal blockage and congestion. They'll have post-nasal drip, which is when mucus drips down the back of the throat, and kids will tend to clear their throats a lot. They also have a runny nose and sneezing, but it's less prominent than in kids with seasonal allergies. Keep in mind that it varies from person to person -- one may have more sneezing, another more of a runny nose, another more of the itchies.

What's the difference between seasonal allergies and perennial allergies? And when do the different kinds of allergies act up?
Seasonal allergies occur mainly with pollen, which comes from plants: weeds, grasses, and trees. Many parents will recognize pollen more in the spring ... if they leave their car outside overnight and go out to it in the morning, they'll see a little yellow dusting on it ... that's pollen. And if you have pollen allergies, symptoms will appear when that's in the air. Classically, it comes from trees early in the spring, in April and May. Then in May, June, and July, the grasses are at their worst, so people with allergies to various kinds of grasses may feel it more. And in the fall it's the weeds, so ragweed allergies may flare up from mid-August to the end of September. That's classic, but it varies in different parts of the country. Perennial nasal allergies mean you're dealing with symptoms year-round, and these are usually indoor allergies: dust mites, animal dander, cockroaches, molds, and feathers. So individuals may have symptoms occasionally or throughout the year, depending on what kind of allergies they have.

What are the common triggers that will bring on allergic reactions?
For kids who have allergies, sometimes everyday objects can be the trigger. For example, their favorite pet -- a dog or cat -- could shed dander (tiny pieces of skin), and that may trigger a flare-up. Sometimes beds can be the trigger, including sheets, mattresses and box springs, because that's where dust mites live. So it's not that you're allergic to the bed; it's the dust mites that are there. There are also triggers that present themselves once kids are in allergy season: during pollen season, things like cigarette smoke or perfumes can be triggers. Sometimes the weather -- the wind and rain -- can affect the amount of pollen in the air, and thus trigger an allergy flare-up.

What should a parent do if they suspect their child has seasonal allergies?
The best thing to do is keep a little diary answering the questions, "when are the symptoms triggered, and by what?" When you see your doctor, they will want to know if there is a pattern and will ask things like, "is it worse during the daytime or nighttime, or is it seasonal?" Those answers can give a lot of information to a doctor. Seeing your health care provider is a smart thing to do ... they can then determine whether you should see an allergist. An allergist can look at the symptoms, do a physical exam and then maybe even do skin testing. Skin testing is when they put small amounts of allergens on the skin, or just below it, and look for a reaction to try to detect what you're allergic to. Once you have been tested and know what you're allergic to, you can avoid some triggers.

What could happen if allergies go untreated? Is there a real danger there?
In kids specifically, we see a lot of problems related to the congestion caused by allergies. Fatigue, especially during the daytime, poor concentration in school, learning problems and other difficulties in school can all be related to nasal congestion, because kids won't be sleeping as well at night. And then during the daytime, they're blowing their nose a lot and experiencing other symptoms. It can lead to peer pressure and social tension ... they may not want to go out and play because they know if they do they'll start sneezing, and that can lead to some shyness. Because children's bones and teeth are still developing, chronic mouth breathing due to allergy-causing congestion can cause teeth to come in at an improper angle. I get a lot of referrals from orthodontists who see kids for braces and figure out that the kid is a mouth breather. Until they fix that, the orthodontist knows that the braces are going to be on longer. Kids who have allergies are more likely to have ear infections and more sinus infections. Also, if they have asthma, uncontrolled allergies can make asthma worse. And there's been some evidence that it can lead to nasal polyps. Unfortunately, many kids suffer from nasal congestion, but they don't complain about it. Forty percent of kids have it -- and roughly 2 million school days are lost per year because of it.

What are the various treatments for allergies?
There are a number of medications. I'm sure most parents have heard about antihistamines -- they help relieve sneezing, itchiness and a runny nose, but don't do a good job on congestion; one of the biggest side effects is that they can cause sedation, extreme tiredness. An example of an antihistamine is Claritin, now available over-the-counter, or the generic and less expensive form called loratadine. Because antihistamines don't handle congestion, people will sometimes combine them with decongestants, which can shrink inflamed nasal tissue and offer relief from nasal congestion. These can be taken orally or by nasal spray. One big caution about over-the-counter nasal sprays is that people use them too often or for too many days in a row, and then their symptoms can get worse; don't use one for longer than a few days in a row. Examples of over-the-counter nasal sprays are Afrin and Neo-Synephrine. There are other anti-inflammatory nasal agents that are by prescription only. These are nasal steroids, and they manage and cover all symptoms of allergies. They get at the cause, which makes them the best thing for seasonal or perennial allergies. Examples are Flonase and Nasonex. The big key for parents is not to confuse them with anabolic steroids. NasalCrom is a nasal spray that is a mast cell stabilizer; it's available over-the-counter and will relieve the sneezing, itching and runny nose, but you have to start using it a few weeks before the season starts, and use it three to four times a day. NasalCrom is not as effective as anti-inflammatory agents like Flonase or Nasonex, which you use just once a day. Flonase, and nasal anti-inflammatories like it, work by controlling the inflammation that causes the symptoms. They are recommended as the first line of therapy for most patients whose symptoms are more than just mild or intermittent. Allergy shots, or immunotherapy, are another treatment, which should be given through an allergist. They inject a small amount of the allergen that affects you, and the dose is increased over time until eventually the patient is on a maintenance dose. This is not a quick fix -- kids who take allergy shots can do it for months or years to achieve benefits. It changes the immune system's response, so rather than masking symptoms with a medication, it changes what's occurring. Most people start seeing benefits within about 12 months and stay on it for 4 or 5 years.

How can a parent tell that what their child has is more than just a cold?
There's no fever associated with allergies. Also, it's repetitive, so if a parent sees a pattern to it, that's a big sign. For example, if your kid is always miserable after coming home from playing with someone with an animal, that's a sign. If it occurs at certain times of the year, or in the morning when they wake up, parents need to look at that and talk to their healthcare professional.

What advice do you have for parents going into this spring season? What should they have their kids avoid?
If you know your kid has seasonal allergies, especially during pollen season, keep the windows and doors closed. I know it's hard, because parents want to open the house up and air it out once spring comes, but keep it shut. Dry clothes in the dryer ... don't hang clothes outside, because then your bed sheets or clothes will be coated with pollen. Also, use the air conditioner, which helps kill dust mites and, by decreasing humidity, helps to keep the pollen out. And if you had water leaks or accumulation over the winter, get them cleaned up so you prevent mold. If you have indoor or perennial allergies, it's more difficult. Don't let the pet sleep in the bedroom, keep pets off the furniture, and bathe them regularly. Using a vacuum with a HEPA filter can be beneficial as well. Remove stuffed animals from the bedroom and wash bedding regularly to alleviate dust mites. Also, you can buy dust mite encasements, which trap the dust mites underneath. You can get those at most department stores or specialty companies."
One of the reasons I like the 2017 film “Wonder Woman” is that the protagonist’s struggles to understand the nature of good, evil and humanity reflect a common problem humanity has in understanding itself. Unfortunately, the point may be missed because the character seems much more naïve than we are. Psychologists have shown that we can be similarly naïve in a more subtle way. (Contains mild spoilers.) At the beginning of Wonder Woman, it’s easy to see Diana, the protagonist, has a very naïve view of humanity. She believes the story that humans were created to be good and were corrupted by Ares, the god of war. When she hears of the horrors of the First World War, she expects that finding and destroying Ares would make it all stop instantly; the people would come back to their senses and stop doing such monstrous things. It’s easy to guess that, whatever the role of Ares may be, she’s in for a rude surprise. When Diana finally encounters Ares, the Lasso of Truth can’t stop him from saying things about humanity that she finds hard to deny. They are certainly not good beings whose minds were taken over by an evil god. He has given them some ideas, but ultimately, they have chosen for themselves. Ares believes that humans are destructive creatures who should be purged from the face of the Earth. Diana set out with the belief that humans were inherently good and Ares was the source of evil, but after seeing the truth about humanity, she finds that her way of thinking inevitably leads her to dark places – all too close to Ares for comfort. Both characters share an underlying assumption: Beings who are good do good things, and only evil beings do evil things. In reality, it’s mostly not about being inherently good or evil, but about being inherently limited. Diana’s journey prepares her to see both sides of things. 
She travels with people of the sort she’d consider dishonorable – a spy, a con-man, a sniper, a smuggler – and sees how they’re each trying to get by in a world that leaves little option to be perfectly good. Especially without superpowers. Cynical as we may be, we have some of the naivety Diana starts out with. In his brilliant and startling book Evil, psychologist Roy Baumeister argues that we have an unrealistic tendency to suppose that, when someone does something that harms us, they must be driven by malicious motives that we ourselves could never share. He calls this the “myth of pure evil”, and it’s also related to the fundamental attribution error and banality of evil. It’s no wonder, then, that we seek to answer the question of where evil comes from as if it’s an active power. But we don’t need a literal or metaphorical Satan when we have a world full of people with partly conflicting interests, who frequently don’t understand what harms another person or don’t care as much as they should. Some of the greatest evil stems from the illusion that one is fighting evil, like Ares in the movie, or terrorists who think they’re Luke Skywalker and “enemy” civilians are the Death Star. Even when we don’t actively set out to harm others like that, if we want to avoid ever committing evil ourselves, we must see through the illusion that it’s a force outside of us.
A new research project on plastic rubbish in waterways is set to investigate the role of New Zealand rivers in transporting damaging plastics to the sea. The three-year study, run by NIWA scientist Dr Amanda Valois, will analyse plastic waste in Wellington's Kaiwharawhara catchment to ascertain how it is entering our fresh water — and how it can be stopped.

Valois was inspired to start the research project by the work of the Sustainable Coastlines organisation, which coordinates large-scale coastal clean-up events. "A lot of what they collect on our beaches is plastic, especially single-use plastic," she says. "But cleaning up this waste by the time it's got to the sea is a very inefficient way of dealing with it." Rather than being the 'ambulance at the bottom of the cliff', Valois wants to find out where it's coming from — and find ways to stop it. She applied to the Ministry of Business, Innovation and Employment's Endeavour Fund through its 'Smart Ideas' mechanism, to look at ways to stop plastic pollution at an earlier stage in the cycle. "Most of the focus has been on the marine system, but we want to see what role rivers play in carrying plastic waste into the ocean," she says.

The Kaiwharawhara stream system begins in Karori, then wends its way through the city's western suburbs and down the Ngaio Gorge before discharging into Wellington Harbour. "We chose Kaiwharawhara because it is a relatively small catchment, so it's manageable to track the different plastic sources, and it's also very diverse," Dr Valois says. "It starts off with pristine headwaters in the Zealandia sanctuary but, by the time it weaves its way through the city and gets to the harbour, it's quite polluted. It's a way to see how plastics accumulate across a diverse landscape."

While many people think the biggest problem with plastic pollution in waterways is aesthetics — it just looks bad — Valois says it has a much greater effect.
"You get plastic wrapping around fish and eels eating it and organisms choking on it or getting stuck in plastic bottles, so it's harming our wildlife; it has an effect on human health as well. We have been looking at ACC reports of people hurt by plastics in the water. When I go into the streams now I always wear water shoes just in case; I think that's sad." Another issue is microplastics — tiny pieces of plastic either from products containing microbeads, or broken down from larger pieces — which may enter the food chain through aquatic animals. Kaiwharawhara was chosen because Wellington's steep topography and regular rain and wind means plenty of plastic refuse finds its way into the stream. That's a phenomenon that Valois says challenges the 'out of sight, out of mind' mindset of even the best recyclers. "People are good at putting their recycling bins out, filled with plastic, but on a windy day you often see that plastic being blown out and onto the side of the road, and into the river," she says. "Once it leaves their yard, people tend to think it's someone else's problem. But it's a big problem, both in the river and then in the ocean. Valois will not be alone picking through the detritus — she leads a team of six scientists working on the project, and is working alongside eight research, community and iwi organisations to gather and analyse the refuse. "We are working closely with community groups and schools and people who live in the area. A lot of rubbish collection has been going on in the catchment already, so we are going to harness that energy to start collecting it in a more exact monitoring way and quantifying it." Valois says the overall aim of the project is to develop standardised methods to find out where the plastic is coming from, how much and what types. "Once we figure out what are the most significant plastic types, we can work with communities to reduce them. 
We've seen the arbitrary banning of microbeads and plastic bags but we don't know what the other sources of plastic are – so that will provide the best bang for our bucks to reduce them." Following the three-year gathering and analysis project, Valois and her team intend to widen their reach: "In five years' time, if our monitoring and efforts to reduce plastics have worked, we hope what we have found out can be applied to catchments around New Zealand." *Keep track of the project — including its more unusual finds — through Twitter (@what_fish_eat).
<urn:uuid:5bc61912-621d-4191-8f81-0742a5701cd7>
CC-MAIN-2019-18
https://www.nzherald.co.nz/the-vision-is-clear/news/article.cfm?c_id=1504591&objectid=12202760
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530040.33/warc/CC-MAIN-20190420200802-20190420222802-00543.warc.gz
en
0.972255
968
3.4375
3
2.980804
3
Strong reasoning
Science & Tech.
In honor of National Frozen Food Month, Plastics Make it Possible recognized innovations in plastics that help deliver convenient and nutritious frozen foods, particularly those creations that have led to less packaging and food waste. Frozen food packaging has changed dramatically since the 1920s when Clarence Birdseye developed methods for quick freezing foods. While frozen foods today are packaged in many materials, technological advances coupled with the rise of the microwave have made plastics the go-to choice for many frozen foods, from vegetable medleys to ready-to-heat meals to gourmet ice cream. "By helping preserve fresh flavors and nutrients in frozen foods, plastic packaging often leads to less food waste," says Steve Russell, vice president of plastics for Washington, D.C.-based American Chemistry Council, which sponsors the Plastics Make it Possible initiative. "And thin, lightweight plastic packaging also leads to less packaging waste. So, consumers can save more food and grocery money and create less waste." For example, many frozen ready-to-heat meals such as stir-fries now are packaged in thin, lightweight plastics that help preserve freshness. Consumers can create quick and easy meals using minimalist packaging that can be scrunched up to about the size of a poker chip. Some examples of how plastics have contributed to the evolution of frozen foods: Freezer-to-microwave TV dinners. In the early days of frozen TV dinners, meals could take an hour to heat in the oven. But, with the advent of the microwave oven, frozen food makers began packaging frozen meals on trays made with plastics that could stand up to both cold and heat. Dinners now can go from freezer to microwave and be prepared in minutes, requiring less preparation time and energy. And, a growing number of U.S. communities collect these trays for recycling, resulting in less valuable materials in landfills. Airtight freezer foods. 
Under-protected food stored in the freezer absorbs nasty odors and flavors and then dries out, resulting in "freezer burn" and wasted food. Factory-sealed plastic bags and containers help preserve the flavor, texture and nutrients of food by locking out air. So, consumers can enjoy nutritious fruits and vegetables year-round, buy wild-caught salmon from Alaska and find all sorts of prepared meals that were unavailable only a few years ago, packaged in thin, lightweight plastics. Plastic steamer bags. Many frozen food makers now sell a large variety of side and main dishes in lightweight plastic pouches designed for heating in the microwave. Consumers simply place the frozen package in the microwave, and moisture steams the food inside the plastic pouch, in one simple step with less cleanup and little waste. Consumers themselves also can buy similarly designed plastic zipper bags to easily and quickly steam their own meals in the microwave. "Active" packaging. Sometimes called "intelligent" or "smart" packaging, active packaging helps protect both fresh and frozen food by doing more than simply containing it. For example, antimicrobials can be incorporated into the plastics used in packaging; this can help mitigate the growth of harmful microorganisms, which helps preserve food quality and results in less spoilage and waste. Recycled plastic packaging. Thanks to new recycling technologies, some frozen food makers are using recycled plastics in their packaging. One major frozen food maker uses plastics from recycled plastic bottles in frozen meal trays for several of its food brands; the company says this diverts an estimated 8 million plastic bottles from landfills annually. And of course these plastic trays are lightweight, which reduces fuel consumption in transport. Do-it-yourself frozen foods. Just what did we do before handy plastic zipper bags? 
Today's consumers can place homemade meals, store-bought foods and leftovers in zipper bags and purge much of the air before freezing (pre-wrapping foods in plastic stretch wrap also helps). To take this concept even further, home vacuum sealers remove nearly all air from the plastic bag prior to sealing, which better protects food to reduce waste. Home vacuum sealing with plastics has grown considerably in recent years; it's particularly popular with warehouse store shoppers and game hunters. For more on new cold packaging materials, check out the Cold Packaging Materials section of Refrigerated & Frozen Foods magazine. For more on frozen food processors, flip through Refrigerated & Frozen Foods’ March 2013 issue, featuring the Top 150 Frozen Food Processors.
<urn:uuid:90bcf07a-5aeb-4fe3-81d6-b5bec70a271d>
CC-MAIN-2021-10
https://www.refrigeratedfrozenfood.com/articles/86723-plastics-make-it-possible-honors-national-frozen-food-month
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00382.warc.gz
en
0.944417
905
3.296875
3
2.531337
3
Strong reasoning
Food & Dining
Wharton: Hill's B-24 Liberator helps tell story Roy • As former Hill Air Force Base commander and retired Lt. Gen. Marc Reynolds graciously explained how he was part of a crew that recovered the rare B-24 Liberator bomber on Alaska's Great Sitka Island that is now on display at the base's fine aerospace museum, MaryAnn Strong approached. The Salt Lake City resident, with grandchildren Mason and Morgan Stevens in tow, told Reynolds that her father, Robert Matheson, was a navigator on the B-24 during World War II. Germans shot down the bomber and her dad spent 14 months in a prison camp. "I thought it is important [to see the plane]," Strong explained. "This was part of our heritage… In our family, it matters a lot." She asked Reynolds where the navigator sat. He took down a barrier and allowed us to crawl under the belly of the big plane to look inside, showing Strong where her father would have worked, right behind the pilot. I came to see the plane after reading author Laura Hillenbrand's best-selling book "Unbroken: A World War II Story of Survival, Resilience, and Redemption." The book chronicles the amazing true story of Louis Zamperini, who represented the United States as a distance runner in the 1936 Berlin Olympics. During World War II, Zamperini served as the bombardier on the relatively new B-24, a difficult-to-fly plane with nicknames such as "The Flying Brick," "The Constipated Lumberer" and "The Flying Coffin." "Flying it was like wrestling a bear," wrote Hillenbrand. The planes came on line in December of 1939 and, with 19,286 constructed, were the most-produced heavy Allied bomber of World War II. Hill Field workers started Utah's first progressive assembly line for the B-24 on Feb. 14, 1943. Zamperini was on a rescue flight in the South Pacific when the B-24 called the Green Hornet he was in crash-landed into the ocean. He barely survived the crash and was able to salvage two rescue rafts.
He and the plane's captain lived 47 days on the ocean where they nearly starved, died of thirst, were attacked by sharks and were shot at by a Japanese plane. After drifting for more than 2,000 miles, they were captured by the Japanese and spent the remainder of the war in concentration camps, barely surviving. I wanted to see the plane for myself. Calling Hill Field, I found that the museum has restored one of what is believed to be only 20 surviving planes. Volunteer Kay Stowell showed me the Plexiglass B-24 nose where Zamperini would have used his Norden bombsights and where the pilots would have sat. "They could fly higher and carry heavier loads than the B-17, but were not as tough as the B-17," Stowell told me about the B-24, pointing to the nearby B-17 inside the museum's large hangar. As I examined and photographed the plane, Stowell returned and said that Reynolds, who now heads up a volunteer program that restores planes and raises funds for the museum, was in his office. The general kindly told me the history of the plane now in the museum. He said that on Jan. 18, 1943, the plane flown by Capt. Ernest "Pappy" Pruett crash-landed on Great Sitka Island in bad weather. The plane was largely forgotten on the uninhabited island for the next 50 years until it was located by a search party. The Aerospace Heritage Foundation of Utah retrieved it in 1995 and returned it to a restoration facility in California. Pruett joined the general as part of the group that helped to disassemble the plane. "There was a lot of hard, tough walking," said Reynolds, who regularly talks with guests about this and other planes in the large museum. Work on the fuselage was completed in 2002 while the wing came to Hill in November of 2006. The plane was mostly completed in December of 2006, but workers are still restoring parts of it, including the gun turret. 
Now part of the museum's exhibit, Hill's B-24 Liberator gives family members such as Strong a chance to see the plane fathers and grandfathers flew and to honor their memory. See more about comments here.
<urn:uuid:9267ce66-6455-4ba8-a83e-15884455049c>
CC-MAIN-2015-22
http://www.sltrib.com/sltrib/news/55424749-78/plane-hill-museum-reynolds.html.csp?page=1
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930916.2/warc/CC-MAIN-20150521113210-00321-ip-10-180-206-219.ec2.internal.warc.gz
en
0.979756
913
2.5625
3
2.611349
3
Strong reasoning
History
Aluminum alloys fall into two main groups: casting alloys and wrought alloys. Casting alloys are poured into sand molds or dies to take their shape. Aluminum is a durable material, but it melts down when exposed to sufficiently high temperatures. Wrought alloys, which include the 6061 aluminum alloy, are cast and then worked into the desired shape by extruding, rolling or forging. Some of these alloys can be heat-treated by different procedures to increase their strength and hardness. Steel tube suppliers also like to use this metal in the manufacture of roll cages and many other products, especially where corrosion is a major concern. Most of us know aluminum best from the foil commonly used to wrap food so that it stays hot for hours.
6061 Aluminum Properties: 6061 aluminum, initially named "Alloy 61S", was one of the earliest aluminum alloys to be developed. It is one of the most commonly obtainable heat-treatable aluminum alloys for commercial and other uses. The 6061 alloy is made up mainly of aluminum, magnesium and silicon. It also contains the metallic elements iron, copper, zinc, chromium, manganese and titanium, in descending order of quantity. Alloy 6061 sets the standard for a medium-strength, inexpensive material. Early versions of the alloy were prone to stress-corrosion cracking, but the addition of a small quantity of chromium makes the alloy highly resistant to corrosion. 6061 aluminum's properties include its strength and toughness, a good surface finish, good corrosion resistance (which makes it well suited to atmospheric and sea-water exposure), good machinability, and the ability to be easily welded and joined.
Many other aluminum alloys are hard to weld because of their chemical composition. Here we look at the specific qualities of 6061 aluminum, which, fortunately, is the alloy we most commonly use and is easily obtainable in the market. Pure aluminum melts at about 660°C, and the 6061 alloy begins to melt at roughly 580°C.
6061 Aluminum Uses: 6061 aluminum is used extensively as a structural material, most commonly in the manufacture of automotive components. It is also well suited to the manufacture of ships, motorbikes, cycle frames, scuba tanks, camera lenses, fishing equipment, fittings for electrical items, valves and connectors. It is also used in the manufacture of cans, and food containers with inner foil wrapping are frequently made with 6061 aluminum alloy. Aluminum-magnesium-silicon alloys are also widely used in roof structures for bridge decks and stadiums.
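The composition described above can be made concrete. The sketch below encodes the nominal composition ranges for 6061 in weight percent (these limits are quoted from memory of the Aluminum Association registration and are illustrative assumptions, not authoritative values) and checks whether a measured composition falls within them:

```python
# Nominal 6061 composition limits in weight percent as (min, max).
# Illustrative only; verify against the registered alloy standard.
LIMITS_6061 = {
    "Si": (0.40, 0.80),
    "Fe": (0.00, 0.70),
    "Cu": (0.15, 0.40),
    "Mn": (0.00, 0.15),
    "Mg": (0.80, 1.20),
    "Cr": (0.04, 0.35),
    "Zn": (0.00, 0.25),
    "Ti": (0.00, 0.15),
}

def within_6061(composition: dict) -> bool:
    """Return True if every listed element sits inside its 6061 range.

    `composition` maps element symbol -> weight percent; elements not
    listed in LIMITS_6061 are ignored (the balance is aluminum).
    """
    return all(
        lo <= composition.get(elem, 0.0) <= hi
        for elem, (lo, hi) in LIMITS_6061.items()
    )

# A plausible mill-certificate composition (hypothetical numbers):
typical = {"Si": 0.6, "Fe": 0.3, "Cu": 0.25, "Mn": 0.05,
           "Mg": 1.0, "Cr": 0.1, "Zn": 0.05, "Ti": 0.02}
```

A composition with, say, 2% magnesium would fail the check, since it falls outside the magnesium-silicon balance that defines this alloy family.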
<urn:uuid:7993b655-e4e0-4a2d-910f-f5f3d96dea60>
CC-MAIN-2023-50
http://landmarkpatents.com/important-information-about-6061-aluminum/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00593.warc.gz
en
0.967364
612
2.765625
3
1.450763
1
Basic reasoning
Industrial
This week is the five-year anniversary of the devastating BP oil spill on the Gulf Coast. Five years ago, on April 20, 2010, an explosion on the Deepwater Horizon oil rig killed 11 Americans. It was one of the worst offshore oil spills in the history of the United States. The BP oil spill: what are the impacts? More than 200 million gallons of crude oil was pumped into the Gulf of Mexico over a total of 87 days. Five years on from the day of the oil spill, science has revealed that the effects are still not over; the impacts are ongoing and significant. What kind of impact remains from the spill? It has been marked as the longest "unusual mortality event" by NOAA (the National Oceanic and Atmospheric Administration). More than 1,000 dolphins have been found dead since 2010 in the northern Gulf of Mexico. Massive tar mats still wash up from the oil spill, five years later. But scientists have been worried about much deeper impacts that they are having a hard time measuring, like the effects on zooplankton, corals, and the many types of marine life that live in the middle depths of the sea, explains EDF Chief Oceans Scientist Douglas N. Rader. "To top it off, all of this occurred near the Mississippi River Delta, an ecosystem already under enormous pressure. This pressure is driven by century-old development choices that favored commerce and development over sustainability of the Delta. And now research has shown that the rate of marsh shoreline erosion increased with oiling," Rader said. How is the Gulf Coast ecosystem being restored? A full recovery will not be easy. It might take years, if not decades, but local residents and leaders are determined to restore the Gulf Coast ecosystem as best they can. Hundreds of restoration projects have long been working to rebuild critical habitats like wetlands and oyster reefs, and the recreational attractions, targeting both economic and environmental needs.
These projects are poised not only to bring back what was lost but also to make the Gulf Coast even better than before. We do not know the long-term effects of this major oil spill, and it might be some time before we fully understand the impacts. But we do know that the Gulf Coast is slowly but steadily regaining ground. We need your voice to be heard. Fight back and push the government in the right direction, to do the right thing. To join forces with us, please be a part of the environmental professionals network. We'd love your feedback and comments on this article. Please feel free to add your comments in the comment box below or on our Facebook page; we'd really appreciate it. As our next huddle guest, we are delighted to have Dr. Robert W. Howarth, Professor of Ecology & Evolutionary Biology at Cornell University, who was also called on by the Attorney General of Alaska to be the lead consultant on their response to the Exxon Valdez oil spill. Related articles and resources: - Exxon Valdez Oil Spill and the Environmental Effects | Nourish The Planet - Five years later: Why the oil spill isn't over | Environmental Defense Fund - Five Years Later, Scientists Gather to Assess Ongoing Impact of BP Oil Spill | Restore the Mississippi River Delta - Cetacean Unusual Mortality Event in Northern Gulf of Mexico (2010-present) :: NOAA Fisheries - International Earth Day April 22 – Celebrate a Clean Earth | Environmental Professionals Network
<urn:uuid:c155ab63-786f-4efb-a5d1-b4a381ab3de6>
CC-MAIN-2020-24
https://ecolonomics.org/bp-oil-spill-why-it-isnt-back-to-normal-five-years-later/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347415315.43/warc/CC-MAIN-20200601071242-20200601101242-00054.warc.gz
en
0.940615
727
3.6875
4
2.963248
3
Strong reasoning
Science & Tech.
The NSF report this story is based on is available online: http://bit.ly/pZnizp CORVALLIS, Ore. – Researchers from a team funded by the National Science Foundation have examined some of last spring's massive tornado damage and conclude in a new report that more intensive engineering design and more rigorous, localized construction and inspection standards are needed to reduce property damage and loss of life. As one of the nation's most destructive tornado seasons in history begins to wane, and hurricane season approaches its peak, experts are working to determine if old, tried-and-true approaches to residential and small-building construction are still adequate, or if it's time to revisit these issues. "Modern building codes are not what we would call inadequate, but they are kind of a bare minimum," said Rakesh Gupta, a professor of wood engineering and mechanics at Oregon State University, and one of the members of the NSF team that traveled to such sites as Tuscaloosa, Ala., and Joplin, Mo. – where a massive EF5 tornado in May killed more than 150 people and caused damage approaching $3 billion. "Beyond that, in the actual construction process, buildings are often not built precisely to codes, due to inadequate construction work or code enforcement," he said. "We can do better. The damage didn't have to be as bad as it was. We can design and build structures more rigorously that could withstand wind forces up to 140-150 miles per hour, which would help them better resist both tornadoes and hurricanes." In their research, the scientists and engineers found that even in the most catastrophic tornadoes, the path exposed to the most extreme winds is very narrow. In the Joplin example, buildings less than one-half mile away probably faced winds in the 130 mph range, which often destroyed them because they lacked appropriate fasteners, tie-downs, connectors, or an adequate number of sheathing nails.
“Another thing we need to consider more in our building practices is the local risks and situation,” said Arijit Sinha, an OSU professor in the Department of Wood Science and Engineering. “Just as cities like San Francisco adapt their building codes to consider earthquake risks, many other towns and cities across the nation could be creating local codes to reflect their specific risks from hurricanes, tornadoes, high winds or other concerns,” Sinha said. “A national building code may be convenient, but it isn’t always the best for every single town in the country.” Among the findings of the new report: - It’s not possible to economically design wood-frame structures that could resist damage from the highest winds in extreme tornado events, such as EF4 or EF5, but irreparable damage from lesser winds could and should be reduced. - Tornadoes and hurricanes apply different types of forces to buildings, and what will adequately protect from one type of storm event isn’t identical to the other. Implementing hurricane-region construction practices in a tornado-prone region is a good start, but not an end solution. - Vertical uplift, one of the special risks from tornadoes, is often not planned for in traditional construction approaches. - Interior closets and bathrooms can provide some protection at lower wind speeds, but more consideration should be given to construction of “safe rooms” that can save lives in major events. Cost will always be an issue in either new construction or retrofitting of existing structures to better resist these violent storms, the researchers said, but in new construction some of the costs are fairly modest. Thicker plywood sheathing, closer stud spacing such as 12 inches on center, tighter nailing schedules, and more consistent use of inexpensive metal connectors such as “hurricane ties” and anchor bolts could accomplish much to improve safety and reduce damage, Gupta said. 
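The reason those wind-speed thresholds matter so much is that wind load grows with the square of wind speed. As a rough illustration, the ASCE 7 velocity-pressure form q = 0.00256·Kz·Kzt·Kd·V² (q in pounds per square foot, V in mph) can be sketched with the exposure, topography and directionality factors set to 1 — an assumption made here for simplicity, not a code-compliant calculation:

```python
def velocity_pressure_psf(v_mph: float,
                          kz: float = 1.0,
                          kzt: float = 1.0,
                          kd: float = 1.0) -> float:
    """Velocity pressure (psf) for a wind speed in mph.

    Follows the ASCE 7 form q = 0.00256 * Kz * Kzt * Kd * V**2.
    The adjustment factors default to 1 here, so this yields only a
    ballpark figure, not a design value.
    """
    return 0.00256 * kz * kzt * kd * v_mph ** 2

# Pressure grows with the square of wind speed:
q130 = velocity_pressure_psf(130)  # 43.264 psf
q150 = velocity_pressure_psf(150)  # 57.6 psf
```

Going from 130 mph to 150 mph is only a 15% increase in wind speed, but roughly a 33% increase in load — which is why designing for 140-150 mph winds is a meaningful step beyond bare-minimum code levels.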
Retrofitting of existing homes is much more costly, but still something many homeowners should consider, he said. And although tornadoes and hurricanes have different types of impacts on buildings, the wind speeds of a moderate tornado and major hurricane are similar. Even where cities and towns don’t have more stringent building codes, Sinha said, individuals can and probably should have their blueprints or structures reviewed by licensed engineers to plan adequately for damage from hurricanes, tornadoes, earthquakes or other extreme forces. For reasons that are not clear, 2011 has been one of the most destructive tornado years in history, even in regions of the Midwest and South that experience these storms with regularity. One of the largest outbreaks of severe weather in U.S. history occurred on April 27, including a tornado that hit Tuscaloosa County in Alabama, destroying or severely damaging 4,700 homes. The new report was based on lessons learned from that event. The report was done by a study team supported by the National Science Foundation and the International Associations for Wind Engineering that included researchers from OSU, the University of Florida, University of Alabama, Applied Technology Council, South Dakota State University, and private industry. Video of the damage in Joplin, Mo., is available online, both on YouTube and in a high resolution format:
<urn:uuid:eb29b8fa-e1f3-4ba2-b2ed-abb3bae9f524>
CC-MAIN-2014-10
http://oregonstate.edu/ua/ncs/archives/2011/aug/increase-storm-damage-brings-call-more-stringent-standards
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011205602/warc/CC-MAIN-20140305092005-00069-ip-10-183-142-35.ec2.internal.warc.gz
en
0.954595
1,089
2.921875
3
2.987797
3
Strong reasoning
Industrial
We often come across terms like shares, stock market and phrases like 'the stock market is up' and 'investment in stocks'. But how many of us really know what these mean? Sure, you're likely to be bombarded with these terms if you turn on a business channel, but many continue to have either little or no certain knowledge about them. Or even worse, sometimes it's false and misleading information. Our schools and colleges don't teach us about investment and financial planning. But this is what really matters once you're out there in the dog-eat-dog world. - A company's capital is divided into shares in order to sustain, grow, expand or raise funds. - This means that people buy shares with the expectation that the value of the business, and so its shares, will rise. - Shares can be classified into Equity and Preference shares. These differ in terms of the power given to shareholders. But fear not, here are some old-school basics about shares we're happy to share (pun unintended). A company's capital is divided into shares in order to sustain, grow, expand or raise funds. Each share forms a unit of ownership of a company and is offered for sale to people who look to invest, in order to raise capital for the company. Now why would anyone buy shares of a company? Well, the obvious reason is: to receive capital gains in the future. This means that people buy shares with the expectation that the value of the business, and so its shares, will rise. Returns can be achieved either through capital appreciation or through dividends. Shares can be classified into Equity and Preference shares. These differ in terms of the power given to shareholders: - Equity shares give shareholders a share in the company's profit as well as a vote in the Annual General Meetings of the company. Such a holder shares in the profits of the company or, inversely, bears any losses incurred by the company.
- Preference shares only give holders a fixed amount - a dividend - from the earnings of the company, and usually carry no voting power. There are several ambiguities around the terms shares, stocks and equities and what they mean. There isn't much difference between these terms other than the context in which they are used. When someone says "stocks", it denotes ownership certificates of companies in general; "shares" denotes the same for a particular company. Equity, on the other hand, refers to the stock/shares held in a company in its various forms, like private equity and so on. The share market is a platform where shares/stocks are sold or traded. However, it's not just shares but also bonds, mutual funds and derivative contracts that are traded in this market. Again, it is classified into two types - the primary and secondary stock market. When a company registers itself for the first time to sell its shares and raise funds, it enters the primary market. This is called the Initial Public Offering or IPO, after which the company becomes public and its shares trade publicly. The secondary market is the market where already-listed companies trade their stocks. An investor buys shares in the secondary market at the current price. It also offers the investor an opportunity to sell shares and exit the market. Learn more about the Primary and Secondary Stock Market. An Introduction to the Indian Stock Market India may not be the best competition when it comes to global investment opportunities and market caps; however, the potential it presents in terms of growth is vast. Investing in the Indian stock market is not such a bad idea, provided you play it smart. The two most basic terms one needs to be familiar with in the Indian stock market are the BSE and NSE. Trading in the stock market takes place on two stock exchanges - the Bombay Stock Exchange (BSE) and the National Stock Exchange (NSE).
Both are rivals in the stock exchange market, however they have the same process and trading mechanisms. Almost all the significant firms of India are listed on both the exchanges. NSE has a dominant share in spot trading and is almost a total monopoly player in derivatives trading. MCX (Multi Commodity Exchange of India Ltd.) and NCDEX (National Commodity & Derivatives Exchange) are two of India’s commodities’ exchanges. How does the Stock Market Work? To many people, the stock market sounds like a scary, complicated entity that cannot be understood. But here’s some basic knowledge that might change that perception. Companies list themselves either in the primary or secondary market to raise funds or capital. The company has to give details about its business, financial status and the stocks being issued (IPO). Once listed, the stocks issued can be traded by the investors in the secondary market. This is where most of the trading happens. In this market, buyers and sellers gather to conduct transactions to make profits or cut losses. However there are thousands of investors, and in order to extend its coverage we have stock brokers who act as intermediaries. They send the order to the exchange. The exchange finds a seller, after which the confirmation is sent back to the broker and the broker finally debits/credits your accounts. As and when trades are conducted, share prices change. This is because prices of shares – like any other goods – are dependent on the perceived value. This is reflected in the rise or fall of demand for the stock. As demand for the stock increases, there are more buy orders. This leads to an increase in the price of the stock. To summarise the steps: - An order is placed. - Broker sends the order details to the exchange. - The exchange looks for the seller to confirm. - The exchange confirms the order to the broker. - Trading happens - money is exchanged. This almost seems like placing an order in Flipkart and Myntra. 
Well, that’s the basic process for you. The stock market might seem like a complicated avenue at first. However, it is necessary to know what it is and how it works, as all of us have a common goal of successful financial planning. Investing in the share market might seem less of a risk when you understand what it is all about. - A company’s capital is divided into shares and these shares are sold or traded to raise funds or grow. - There are Equity and Preference shares. Equity shares give the holder voting power and the holder has to share the profit of the company. Preference shares usually give no voting power but provides fixed amount from profits-dividends. - Share Market is the market where shares/stocks are traded, bought or sold. When a company registers itself for the first time to sell shares (called IPO), it trades in the Primary market. Secondary market is where companies that are already listed involve in trading stocks. - The trading in the stock market in India takes place in two stock exchanges- the Bombay Stock Exchange (BSE) and the National Stock Exchange (NSE). - Share prices change as they are dependent on the perceived value of one unit of a company. If the demand of the stock rises, the buy order will increase and so does the price. Tell us about you Find us at the office Kajioka- Constanza street no. 39, 50889 Kuala Lumpur, Malaysia Give us a ring +59 850 269 756 Mon - Fri, 10:00-14:00
American Flags in Ely. 100% Made in America!

The U.S. flag is an enduring icon of American identity and national pride. Known as Old Glory, the U.S. flag has a vibrant history and has undergone many changes since the first official flag of 1777. Today the flag consists of thirteen horizontal stripes, seven red alternating with six white. The colors of the flag are symbolic as well: red symbolizes hardiness and valor, white symbolizes purity and innocence, and blue represents vigilance, perseverance and justice. Over time, some have attached slightly different meanings to the three colors, for example the color red representing the blood spilled to protect our freedoms, but the original symbolism has remained relatively constant since 1782.

How did the American flag become what it is today? It is much more than three colors or a "decoration." Think of the places around the world where the American flag has flown, and consider the changes it has undergone over the years on American soil. It is truly humbling to consider all that was given and sacrificed so that the American flag could fly freely across this country.

The flag that started with just 13 stars grew to 50 with the addition of new states to the Union. The number of stars increased gradually to its present count, with a new star added to the blue field on the 4th of July following each new state's admission. The number of alternating horizontal red and white stripes has remained at thirteen, except from 1795 to 1818, when fifteen stripes appeared on the flag to mark the admission of Kentucky and Vermont to the Union. In 1818 it was decided that a stripe would no longer be added for each new state, as doing so would make the flag appear crowded and unwieldy.
It was agreed then that the flag would return to thirteen stripes to represent the original colonies. The American flag is a symbol not only of hardiness, valor, purity, innocence, vigilance, perseverance and justice; it is an icon of freedom. Freedom that has been fought for so hard over the decades. Liberty that has cost this nation and the families within it so much, and yet it is still a beacon to those wishing they had the freedom that this country has.

Folding the American flag. Traditional flag etiquette recommends that before an American flag is stored or raised, its handlers fold it twice in half lengthwise; then (from the end opposite the blue field) make a triangular fold, and continue folding in a triangular pattern until the other end is reached. This makes a triangular "cushion" of the flag with only the blue starred field showing on the outside, and it takes thirteen folds to create: two lengthwise folds and eleven triangular ones. The flag is not folded this way because each of the folds has a special symbolic meaning; it is folded this way because it provides a dignified ceremonial touch that distinguishes folding a flag from folding an ordinary object such as a bed cover, and because it results in a visually pleasing, easy-to-handle shape. This thirteen-fold procedure was a common technique long before the invention of a ceremonial assignment of "meaning" to each of the steps. An elaborate flag-folding ceremony incorporating these meanings has since been devised for special occasions such as Memorial Day and Veterans Day. These meanings are "genuine" in the sense that they mean something to the people who take part in the ceremony, but they are not the reason a flag is folded in the traditional thirteen-step manner. This is America, and its symbol is the American flag.
Even though many Americans in Minnesota proudly fly the flag outside their homes and businesses every day, it is fitting that we, as a nation, have set aside one specific day each year to honor our flag and to remember that it represents the ideals and values that we should strive to uphold. Ely ZIP codes we serve: 55731
Amorphous polymers are generally considered to be a molder's insurance policy against the problems of dimensional stability. Amorphous structures undergo a smaller and more predictable change in volume as they cool from the melt to the solid, making them easier to mold to close tolerances. If you study the shrinkage behavior of a part molded in an amorphous material, you will also find that the part achieves stable dimensions in a shorter period of time. However, over time amorphous polymers also undergo a slow and subtle structural change that can result in continued shrinkage. This process is known as physical aging, and it was not well understood in polymers until the 1970s.

Any process that involves melting and re-solidifying a polymer involves a compromise between achieving the perfect structure and producing a part that can be sold at a price that the market is willing to pay. Optimal structural stability is achieved by allowing the polymer chains in a system to reach their ideal configuration in terms of spatial separation. In a semi-crystalline material, as we have already discussed, this means attaining an optimal degree of crystallinity. In amorphous polymers, where no significant level of crystallinity is obtained, the ultimate objective is something called thermodynamic equilibrium. In both cases this involves allowing the polymer chains to reach their ideal arrangement at the molecular level. This is typically achieved by maintaining the material at an elevated temperature for a prolonged period.

Unfortunately, perfection requires too much time to be practical from an economic standpoint. So the objective of good process control is to achieve a level of structural stability that is adequate for applications in the real world. When a part is molded in an amorphous polymer, the material has not reached thermodynamic equilibrium. It spends the rest of the life of the product trying to get there.
Physical aging can be thought of as the slow contraction of the polymer as the chains draw closer together, collapsing into the excess free volume that remained when the part was first produced. This closer approach produces a structure that is stronger and stiffer than the original matrix; however, the material also loses toughness in the process.

The rate at which physical aging occurs is a function of the difference between the application temperature and the glass-transition temperature (Tg) of the polymer. The closer the application temperature is to the Tg, the more rapidly physical aging occurs. So it is not surprising, in retrospect, that one of the first practical implications of physical aging was observed in an amorphous polymer with a relatively low Tg: PET polyester. The amorphous PET used to mold preforms and then blow beverage bottles has a Tg of about 172 F (78 C). At room temperature, the time needed for physical aging to produce a measurable change in mechanical properties can be as long as one to two years. However, when bottles were stored in a warehouse at 120 F (49 C), the gap between the application temperature and the Tg shrank by about 50%, and the rate of physical aging accelerated by approximately an order of magnitude. Bottles stored under these conditions exhibited a measurable loss in impact strength in just a few weeks.

Physical aging only occurs in what material scientists refer to as glasses, which are essentially any amorphous material that exists as a "solid." All commercial polymers, even semi-crystalline ones like polyethylene and nylon, contain some amorphous regions and exhibit a corresponding glass transition. Therefore, it is technically possible for semi-crystalline materials to undergo physical aging and the associated changes in properties. In most cases, however, these changes are overshadowed by the contributions from the crystalline phase.
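The warehouse example lends itself to a quick arithmetic check. The Tg and warehouse temperature come from the text; the room temperature of 73 F (23 C) is an assumed value.

```python
# Gap between application temperature and Tg for amorphous PET, in deg C.
# Tg (78 C) and warehouse temperature (49 C) are from the text;
# room temperature (23 C) is an assumption.
tg = 78.0          # Tg of amorphous PET
room = 23.0        # assumed room temperature
warehouse = 49.0   # warehouse storage temperature

gap_room = tg - room            # 55 C below Tg
gap_warehouse = tg - warehouse  # 29 C below Tg

reduction = 1.0 - gap_warehouse / gap_room
print(f"{reduction:.0%}")  # ~47%, consistent with the "about 50%" cited
```

The halved gap to Tg, not the modest absolute temperature rise, is what drives the order-of-magnitude acceleration in aging rate.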
But some semi-crystalline materials achieve a relatively low level of crystallinity that is highly dependent upon processing conditions. These tend to be the high-performance materials such as PEEK and PPS, where failure to cool the polymer at an appropriate rate will result in an almost completely amorphous structure. In these materials physical aging can occur and has been observed, along with the associated property changes.

Because physical aging involves a change in volume, it can be detected as a change in dimensions. As a percentage of the size of the part, the change is relatively small, so detection requires either very precise measurements or a very large part, where even a small percentage produces a readily measurable difference. Precise dimensions are not demanded in a beverage bottle, but there are industries that make measurements in microns. They have observed that parts molded in amorphous polymers with low glass-transition temperatures, such as rigid PVC, continue to exhibit a very small degree of dimensional change over a period of months. These changes are much smaller than the ones that occur due to the solid-state crystallization that we discussed previously, but they can still cause a part to drift out of print over an extended period, even though the parts are only exposed to room temperature.

This process can be slowed down if the parts are stored at sub-ambient temperatures, but it cannot be stopped completely unless the temperature is lowered to the point where the polymer undergoes another, more subtle transition called the beta transition. For most polymers this transition takes place at a very low temperature that would not be practical for part storage and certainly will not be encountered in most application environments. For example, the beta transition in PVC occurs near -58 F (-50 C), and in polycarbonate it is at -148 F (-100 C).
Up to this point, we have discussed long-term behavior that causes molded parts to become smaller over time. But there are some long-term influences that actually cause parts to grow. In the next installment we will discuss this behavior and look at the mechanisms behind it.
The history of the United States is large and complicated, but it can be broken down into moments and time periods that divided, united, and transformed the United States into the nation it is now. The American flag did not always look like it does today; it went through many changes and alterations along the way.

The American Revolutionary War

Enter the American Revolution. Sometimes referred to as the American War of Independence, or the Revolutionary War, it was a conflict that lasted from 1775 to 1783 and allowed the original 13 colonies to become independent from Great Britain. (Later, starting in Great Britain in the late 1700s, the Industrial Revolution would make its way to the United States and transform the focus of the country's economy and the way it manufactured goods.)

For more than 10 years before the outbreak of the revolution in 1775, tensions had been building between colonists and the British authorities. These tensions arose from growing friction between residents of Great Britain's 13 North American colonies and the colonial government, which represented the British crown. Attempts by the British government to raise revenue by taxing the colonies (especially the Stamp Act of 1765, the Townshend Acts of 1767 and the Tea Act of 1773) met with heated protest from many colonists, who resented their lack of representation in Parliament and demanded the same rights as other British subjects. Colonial resistance led to violence in 1770, when British soldiers opened fire on a mob of colonists, killing five people in what became known as the Boston Massacre.
After December 1773, when a band of Bostonians dressed as Mohawk Indians boarded British ships and dumped 342 chests of tea into Boston Harbor, an outraged Parliament passed a series of measures (known as the Intolerable, or Coercive, Acts) designed to reassert royal authority in Massachusetts.

The Continental Congress convened in May 1775 and agreed to raise an army, with George Washington as its commander in chief. Congress hoped it could force the British to negotiate, but George III refused to compromise. Instead, in August 1775 he proclaimed the American colonies to be in a state of rebellion. Meanwhile, rule by royal governors broke down, and people demanded government without royal interference. In May 1776 Congress resolved that royal government should cease and that government should be "under the authority of the people." The colonies then drew up state constitutions to replace their charters.

By June 1776, with the Revolutionary War in progress, a growing majority of colonists had come to favor independence from Britain. That same year Richard Henry Lee of the Virginia Assembly presented Congress with resolutions declaring the independence of the colonies, calling for a confederation, and expressing the need to find foreign allies for a war against Britain. On July 4th, the Continental Congress voted to adopt the Declaration of Independence, drafted by a five-man committee including Franklin and John Adams but written mainly by Jefferson.

By the autumn of 1781, American forces had managed to force the enemy to retreat to Virginia's Yorktown peninsula, near where the York River empties into Chesapeake Bay. Backed by a French army commanded by General Jean Baptiste de Rochambeau, Washington moved against Yorktown with a total of around 14,000 soldiers, while a fleet of 36 French warships offshore prevented British reinforcement or escape.
Trapped and overpowered, the enemy was forced to surrender its entire army. Claiming illness, the British general sent his deputy, Charles O'Hara, to surrender; after O'Hara approached Rochambeau to offer his sword (the Frenchman deferred to Washington), Washington gave the nod to his own deputy, Benjamin Lincoln, who accepted it. After French aid helped the Continental Army force the British surrender at Yorktown, Virginia, in 1781, the Americans had effectively won their independence, though fighting would not formally end until 1783. Although the movement for American independence effectively triumphed at Yorktown, contemporary observers did not see it as the decisive victory. British and American negotiators in Paris signed preliminary peace terms late that November, and on September 3, 1783, Great Britain formally recognized the independence of the United States in the Treaty of Paris.

How the American Flag Came to Be

The American flag was designed to represent the new union of the thirteen original states: it would have thirteen stripes, alternating red and white, and thirteen stars, white on a blue field. One of the first flags had the stars arranged in a circle, based on the idea that the colonies were equal. The thirteen stripes, laid side by side, represented the struggle for independence; red stood for valor, white signified purity, and blue represented loyalty. In 1818, after a few design changes, the United States Congress decided to keep the flag's original thirteen stripes and add new stars to reflect each new state that joined the union.

While there is no question that the real Betsy Ross deserved attention in her own right, it is the story of Betsy stitching the first Stars and Stripes that has made her a memorable historical figure.
The Betsy Ross story was brought to public attention in 1870 by her grandson, William Canby, in a speech he made to the Historical Society of Pennsylvania. Canby and other members of Betsy's family signed sworn affidavits stating that they had heard the story of the making of the first flag from Betsy's own mouth. According to the oral history, in 1776 three men, George Washington, Robert Morris, and George Ross, visited Betsy Ross in her upholstery shop. She escorted them to her parlor, where they could meet privately. There, Washington pulled a folded paper from his inside jacket pocket; on it was a drawing of a flag with thirteen red and white stripes and thirteen six-pointed stars. Washington asked whether Betsy could make a flag from the design. Betsy replied: "I don't know, but I will try." This line was used in the sworn statements of a number of Betsy's family members, suggesting that it is a direct quote from her. As the story goes, Betsy suggested changing the stars to five points instead of six, and showed them how to cut one with just a single snip of her scissors. They all agreed to change the design to five-pointed stars.

Some historians believe it was Francis Hopkinson who conceived the idea of the Stars and Stripes. Francis Hopkinson was a prominent patriot, a lawyer, a Congressman from New Jersey, a signer of the Declaration of Independence, a poet and musician, and a distinguished civil servant. He was appointed to the Continental Navy Board on November 6, 1776, and it was while serving there that he turned his attention to designing the flag of the United States. The use of stars in that design is believed to have been the result of a wartime experience directly involving his property: a book from Hopkinson's library at his home in Bordentown was taken by a Hessian soldier in December 1776, a dark year of the war.
The book, Discourses on Public Occasions in America (London, 1762) by William Smith, D.D., had been a gift to him from the author. The soldier, one I. Ewald, wrote on the inside cover that he had seen the author near Philadelphia and that he, Ewald, had taken the book from a fine country seat nearby. The book was later given to someone in Philadelphia who returned it to Hopkinson. The soldier had written above and below Hopkinson's bookplate, which bore three six-pointed stars and the family motto "Semper Paratus," or "Always Ready." The safe return of the book may well have represented to Hopkinson the revival of the Americans' hopes.

In a letter to the Board of Admiralty in 1780, Hopkinson asserted that he had designed "the flag of the United States of America" as well as a number of ornaments, devices, and checks appearing on bills of exchange, ship papers, the seals of the Boards of Admiralty and Treasury, and the Great Seal of the United States. Hopkinson had received nothing for this work, and now he submitted a bill and asked "whether a Quarter Cask of the public wine" would not be a reasonable and proper reward for his labors. Nevertheless, no one can say for certain who designed the American flag.

The American flag is the sacred symbol of the nation. It represents the citizens' birthright, their heritage of liberty purchased with blood and sorrow; the title deed of freedom, which is the nation's to enjoy and hold in trust for posterity. Eternal vigilance is the price of liberty. As you see the flag silhouetted against the peaceful skies of the country, you are reminded that the American flag stands for what you are: no more, no less.
Top American Flags in Wisconsin

As quoted from The Star-Spangled Banner: O say can you see, by the dawn's early light, What so proudly we hailed at the twilight's last gleaming, Whose broad stripes and bright stars through the perilous fight, O'er the ramparts we watched, were so gallantly streaming? And the rockets' red glare, the bombs bursting in air, Gave proof through the night that our flag was still there; O say does that star-spangled banner yet wave O'er the land of the free and the home of the brave?

ZIP codes in Gillett we serve: 54124
Are you right-brained or left-brained? Psychologists tell us the two sides of our brain each control different parts of our learning and actions. The left side of the brain is analytical and logical; the right side controls creativity and expression. Most of us probably rely on one side of the brain more than the other in how we approach life and problem solving.

Today we are going to look at putting as the final segment of improving your short game to score better. The short game, and most especially putting, is dominated by feel. And that is where that right brain/left brain thing will come in later.

More strokes in the average round of golf are taken with the putter than with any other club. Yet there seem to be more ways to putt well than there are brands of golf balls on the market. Putting is the most unique and individualized stroke you will find in the game of golf. You will see great putters who use different grips (like the claw or left-hand low), different styles of putters (belly putter, long putter) and even different stances, but the basics of putting hold true.

Let's face it: most golfers will hit their first putt on most greens from a distance of more than 10 feet. We occasionally stick an approach shot within a couple feet of the hole, but our normal putt will be something more substantial. From 10 feet, even tour pros make less than 50 percent of their putts, and their percentage drops to less than 25 percent when the distance increases to 15-20 feet.

Of course, amateur make rates are even lower from these distances. But what Tour players do so well is that they rarely three-putt, because if they miss their first putt, their distance control is so good they are left with less than 2 feet for their next effort. Although line is important for your attempt to make the putt, if you miss, be sure you are left with an easy tap-in by learning to feel the distance or speed of the putt. Let's build an illustration of this concept.
Assume you have a 20-foot putt that breaks 2 feet, and you make a 25 percent error in your putt. A 25 percent error based on line would result in a second putt of just 12 inches, but a 25 percent error in distance or speed would result in a second putt of 5 feet.

But what can we learn from that right brain/left brain thing that will help us with line on putts, and maybe even some with distance control, too? Most golf teaching professionals categorize putters into two groups: 1) line putters and 2) feel putters. Line putters base their putting decision on an evaluation of the green's break, speed and grain. After analyzing this, they select a line for the putt by envisioning the path the ball will follow and then putting to a spot at the apex of that path, hence the name line putter. Sounds rather analytical, doesn't it? Perhaps a left-brain-dominated approach.

The second type of golfer uses innate feel to judge the putt and the conditions of break, grain and speed. This golfer is much like the archer who shoots an arrow instinctively, without a bow sight, and automatically adjusts for the trajectory of the arrow. This golfer is a feel putter, and draws upon the right side of the brain for the feel and creativity that is needed. Interestingly enough, both types of golfers can be equally successful in their technique, especially if they tune into the other side of their normal thinking process to help control distance in putting.

Brad Ruminski is a certified PGA Golf Professional at the Rolling Acres Golf Club (419-652-3160).
Owner: W.S. Bullard, Boston
Builder: Hugh R. McKay, East Boston
Class and type: Clipper
Tons burthen: 1254 tons
Length: 192 ft (59 m)
Beam: 39 ft (12 m)
Draft: 23 ft (7.0 m)

Ganges was an 1854 clipper ship built by Hugh R. McKay in East Boston. Although she was famed for a race with Flying Cloud and Bald Eagle, the race actually never took place. Her captain, George Blunt Wendell (1831–1881) of Portsmouth, New Hampshire, who had apprenticed in the counting house of Goodwin & Coues, was known for his business acumen, unlike many ship masters of the day, who were "found to be a thorough seaman and smart navigator, but a poor merchant."

The tale of Ganges' race with Flying Cloud and Bald Eagle is an example of how a gripping sea story can have no basis in fact. Arthur H. Clark recounts it as follows: In 1851, two of the fastest clippers, Flying Cloud and Bald Eagle, left Whampoa laden with tea just two or three days after the Ganges. Ganges appears to have beaten her two rivals to Anjer, in a strong southwestern monsoon, and arrived at the English Channel on 16 December.

On the following morning at daylight we were off Portland, well inshore and under short sail, light winds from the northeast, and weather rather thick. About 8 A.M. the wind freshened and the haze cleared away, which showed two large and lofty ships two or three miles to windward of us. They proved to be our American friends, having their Stars and Stripes flying for a pilot. Captain Deas at once gave orders to hoist his signals for a pilot also, and as, by this time, several cutters were standing out from Weymouth, the Ganges, being farthest inshore, got her pilot first on board. I said that I would land in the pilot-boat and go to London by rail, and would report the ship that night or next morning at Austin Friars. (She was consigned to my firm.)
The breeze had considerably freshened before I got on board the pilot cutter, when the Ganges filled away on the port tack, and Captain Deas, contrary to his wont, for he was a very cautious man, crowded on all small sails. The Americans lost no time and were after him, and I had three hours' view of as fine an ocean race as I can wish to see; the wind being dead ahead, the ships were making short tacks. The Ganges showed herself to be the most weatherly of the three; and the gain on every tack inshore was obvious, neither did she seem to carry way behind in fore reaching. She arrived off Dungeness six hours before the other two, and was in the London docks twenty-four hours before the first, and thirty-six hours before the last of her opponents.' " However, the Ganges in question is identified in various sources as a British ship, not an American clipper. Clark adds: "It is always unpleasant to spoil a really good story, but in this instance I feel constrained to point out that the Flying Cloud arrived at San Francisco on August 31, 1851, after her famous passage of 89 days from New York; it is therefore difficult to understand how she could have sailed from Wampoa on the Canton River on or about September 1st of that year, as stated by Mr. Cowper; while the Bald Eagle was not launched until 1852."
Language ENvironment Analysis

Through language-enhanced parent education, we hope to build on the knowledge you already have of child development, particularly language and social-emotional development, through LENA research coupled with Nurturing Parenting and Adverse Childhood Experiences awareness group sessions. Parent/caregiver groups are conducted once per week at a community location near you. This program helps parents and caregivers understand the importance of talking with their children, early brain development, and early literacy through back-and-forth conversational turns. Participants are provided childcare, refreshments, 10 books for the child, and a weekly gift card for participating.

Why is talking with children so important? More than 20 years of research show a relationship between the amount of language children experience and their brain development. Recent brain imaging studies indicate that conversational turns in particular, the back-and-forth alternations between a child and an adult, have unique brain-building power. During the first three years of a child's life, these back-and-forth conversations are a significant factor driving brain growth and school readiness.

Why does LENA focus specifically on early talk, rather than other factors that may affect children's development? When adults engage in high-quality "serve and return" interactions with children, they build a healthy foundation of brain architecture and neural connections that will last a lifetime. LENA Start is designed to help caregivers boost interactive talk with children in anticipation of many cascading benefits for the child, like better social-emotional health and school readiness, that will lay the groundwork for future success.

Is there variability in how much different families talk with children? Yes. Studies that used LENA technology to understand the home language environment found significant differences in how much families talk with their children.
Even in the same household, talk levels vary significantly during the course of the day and from week to week. That's where LENA comes in: our language-measurement technology is an objective tool to help parents understand, measure, and increase conversations with children. Visit one of our partners for more information. LENA classes are always enrolling; call 901.678.3589 to find out more.

Click on the logo below to learn more about each program.

LENA Start - Pha'Meshia Calico, Coordinator

LENA Start is a program for parents that uses regular feedback from LENA's "talk pedometer" technology to help increase interactive talk, close the early talk gap, improve school readiness, and build stronger families. Over the course of 10 weekly sessions, parents and caregivers learn about the importance of interactive talk along with ways to incorporate more conversation into their daily routines. The program combines the use of LENA technology to measure the home language environment with parent group meetings that teach simple techniques to improve the quantity and quality of adult-child talk. Click here for a printable flyer.

If you are interested in referring a client or participating in this program, please download, complete and save the form below, and e-mail it to Sonja Randall. You can also fax the form to 901.678.1665. Childcare administrators, family day homes, and teachers can scan below to join our LENA Grow community for updates and announcements.

LENA Grow - Ashley Foster, Coordinator

LENA Grow is a professional development program for early childhood educators in family day homes or child care programs that uses regular feedback from LENA technology to improve the talk environment in the classroom. Educators meet with a coach one-on-one or in group sessions, weekly or every other week, with the flexibility to fit their schedules.
The coaching sessions provide time to review the data on the interactive talk environment in each classroom and use the data to set goals to talk more. Click here for a printable flyer. Please click here to learn more about the LENA Grow teacher certification program or here to view a list of LENA Grow certified teachers. To sign up for LENA Grow classes and teacher certification for your child care facility or family child care center, follow this link or paste it into your browser, then fill out the form: Ask about LENA Grow at the following locations!
- ABC Childcare Center 3280 Park Ave 38111
- Around The Clock Learning Center 2995 Lamar Ave 38114
- Creative Home Academy FDH 1149 Seemes St 38111
- Gateway Learning Center 185 Norwood St, 38109
- Kidazzle CCC 3194 Independent Dr, 38118
- Kiddie Kollege 1980 E Person, 38114
- Kingdom Hearts 3515 Boxdale St, 38118
- Knowledge is Key 715 St. Paul St, 38126
- Little Scholars (Family Day Home) 3210 Cromwell Avenue 38118
- Nana's Nursery Family Day Home 1265 Effie Street, 38106
- TLC Learning Academy 4364 Millbranch Rd, 38116 (2022)
- University of Little Scholars 3333 E Shelby Drive, 38118
- Yale Road Learning Center 4400 Yale Rd, 38128
Parents and Early Interventionists, scan below to join our community for updates. LENA Home - Alexzander Price, Coordinator LENA Home is ideal for coaching and home visitation programs that would like to supplement their curriculum with objective feedback from LENA technology. Home visitors using LENA technology to measure language gain objective insights on home talk that can be used to inform and motivate parents and track progress toward increasing quality interactions. Productivity is increased through clear reports that provide insights on how much adults spoke to their child, back-and-forth exchanges between parent and child, and TV and electronic sound throughout the day. The feedback is designed to show progress over time.
You can also receive scheduling and text reminders that simplify logistics and ease the burden on home visitors, implementation support, one year of technical assistance, and two validated measures of child language behavior. LENA Home is best suited for families with a young child that has an IFSP or suspected disability. Click here for a printable flyer.
Differentiation is no longer about who can collect the most data. It’s about who can quickly make sense of the data they collect. There once was a time when hardware sampling rates, limited by the speed at which analog-to-digital conversion took place, physically restricted how much data was acquired. But today, hardware is no longer the limiting factor in acquisition applications. The management of acquired data is the challenge of the future.
- Prism, Big Data, And Double Secret Probation
- Interview: Big Data With LSI's Kimberly Leyenaar
- Light-Speed Silicon-Photonic Devices Conquer "Big Data" Challenges
Advances in computing technology, including increasing microprocessor speed and hard-drive storage capacity, combined with decreasing costs for hardware and software, have provoked an explosion of data coming in at a blistering pace. In measurement applications in particular, engineers and scientists can collect vast amounts of data every second of every day. For every second that the Large Hadron Collider at CERN runs an experiment, the instrument generates 40 terabytes of data. For every 30 minutes that a Boeing jet engine runs, the system creates 10 terabytes of operations information (Gantz, 2011). That’s “big data.” The big data phenomenon adds new challenges to data analysis, search, integration, reporting, and system maintenance that must be met to keep pace with the exponential growth of data. And the sources of data are many. However, among the most interesting to the engineer and scientist is data derived from the physical world. This is analog data that is captured and digitized. Thus, it can be called “Big Analog Data.” It is collected from measurements of vibration, RF signals, temperature, pressure, sound, image, light, magnetism, voltage, and so on. Challenges unique to Big Analog Data™ have provoked three technology trends in the widespread field of data acquisition.
Contextual Data Mining*
The physical characteristics of some real-world phenomena prevent information from being gleaned unless acquisition rates are high enough, which makes small data sets an impossibility. Even when the characteristics of the measured phenomena allow more information gathering, small data sets often limit the accuracy of conclusions and predictions in the first place. Consider a gold mine where only 20% of the gold is visible. The remaining 80% is in the dirt where you can’t see it. Mining is required to realize the full value of the contents of the mine. This leads to the term “digital dirt,” meaning digitized data can have concealed value. Hence, data analytics and data mining are required to achieve new insights that have never before been seen. Data mining is the practice of using the contextual information saved along with data to search through and pare down large data sets into more manageable, applicable volumes. By storing raw data alongside its original context, or “metadata,” it becomes easier to accumulate, locate, and later manipulate and understand. For example, examine a series of seemingly random integers: 5126838937. At first glance, it is impossible to make sense of this raw information. However, when given context like (512) 683-8937, the data is much easier to recognize and interpret as a phone number. Descriptive information about measurement data context provides the same benefits and can detail anything from sensor type, manufacturer, or calibration date for a given measurement channel to revision, designer, or model number for an overall component under test. In fact, the more context that is stored with raw data, the more effectively that data can be traced throughout the design life cycle, searched for or located, and correlated with other measurements in the future by dedicated data post-processing software.
Intelligent DAQ Nodes
Data acquisition applications are incredibly diverse.
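The metadata pattern from the contextual data mining discussion above can be sketched in a few lines: store each raw measurement alongside its descriptive context, then pare large record sets down by attribute instead of scanning blindly. This is a minimal illustration, not vendor tooling; the channel names, sensor types, and calibration dates are invented.

```python
# Sketch: raw samples stored with descriptive context ("metadata"),
# so data sets can later be searched and pared down by attribute.
# All channel names, sensor types, and dates below are invented examples.

records = [
    {"channel": "vib_01", "sensor": "accelerometer", "calibrated": "2013-01-15",
     "unit": "g", "samples": [0.02, 0.03, 1.7, 0.02]},
    {"channel": "temp_04", "sensor": "thermocouple", "calibrated": "2012-11-02",
     "unit": "degC", "samples": [21.4, 21.5, 21.5, 21.6]},
]

def find(records, **context):
    """Return only the records whose metadata matches every given key/value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in context.items())]

accel = find(records, sensor="accelerometer")
print([r["channel"] for r in accel])  # -> ['vib_01']
```

The more context keys stored with each record (manufacturer, revision, component under test), the finer the later queries can be, without touching the raw sample arrays.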
But across a wide variety of industries and applications, data is rarely acquired simply for the sake of acquiring it. Engineers and scientists invest critical resources into building advanced acquisition systems, but the raw data produced by those systems is not the end game. Instead, raw data is collected so it can be used as an input to analysis or processing algorithms that lead to the actual results system designers seek. For example, automotive crash tests can collect gigabytes of data in a few tenths of a second that represent speeds, temperatures, forces of impact, and acceleration. But one of the key pieces of pertinent knowledge that can be computed from this raw data is the Head Injury Criterion (HIC), a single calculated scalar value representing the likelihood that a crash dummy would experience a head injury in the crash. Additionally, some applications—particularly in the environmental, structural, or machine condition monitoring spaces—lend themselves to periodic, slow acquisition rates that can be drastically increased in bursts when a noteworthy condition is detected. This technique keeps acquisition speeds low and minimizes logged data while still allowing sampling rates that are adequate for high-speed waveforms when necessary in these applications. To incorporate tactics such as processing raw data into results or adjusting measurement details when certain criteria are met, you must integrate intelligence into the data-acquisition system. Though it’s common to stream test data to a host PC (the “intelligence”) over standard buses like USB and Ethernet, high-channel-count measurements with fast sampling rates can easily overload the communication bus. An alternative approach is to store data locally and transfer files for post-processing after a test is run, which increases the time it takes to realize valuable results.
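The HIC mentioned above is a good example of reducing raw data to one scalar: it is the maximum, over all time windows [t1, t2], of (t2 − t1) times the window's mean acceleration (in g) raised to the power 2.5. The sketch below is a plain-Python illustration with an invented acceleration pulse, not crash-test software; real procedures also standardize the window cap (commonly 15 or 36 ms).

```python
def hic(times, accel_g, max_window=0.015):
    """Head Injury Criterion: max over windows [t1, t2] of
    (t2 - t1) * (mean acceleration over the window) ** 2.5,
    with acceleration in g, time in seconds, window capped at max_window."""
    n = len(times)
    # Cumulative trapezoidal integral of a(t), so any window's mean
    # acceleration is (cum[j] - cum[i]) / (times[j] - times[i]).
    cum = [0.0]
    for i in range(1, n):
        dt = times[i] - times[i - 1]
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            window = times[j] - times[i]
            if window > max_window:
                break
            avg = (cum[j] - cum[i]) / window
            if avg > 0:  # guard: avoid fractional power of a negative mean
                best = max(best, window * avg ** 2.5)
    return best

# Invented 1 kHz crash pulse peaking at 60 g.
t = [k / 1000.0 for k in range(10)]
a = [0.0, 5.0, 20.0, 60.0, 55.0, 30.0, 10.0, 5.0, 2.0, 0.0]
print(round(hic(t, a), 1))  # a single scalar distilled from the raw trace
```

This is exactly the "raw data in, one result out" pattern the article describes: gigabytes of acceleration samples reduce to one number that the test actually needed.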
To overcome these challenges, the latest measurement systems integrate leading technology from ARM, Intel, and Xilinx to offer increased performance and processing capabilities as well as off-the-shelf storage components to provide high-throughput streaming to disk. With onboard processors, the intelligence of measurement systems has become more decentralized by having processing elements closer to the sensor and the measurement itself. Modern data acquisition hardware includes high-performance multicore processors that can run acquisition software and processing-intensive analysis algorithms in line with the measurements. These intelligent measurement systems can analyze and deliver results more quickly without waiting for large amounts of data to transfer, or without having to log it in the first place, which optimizes the system to use disk space more efficiently.
The Rise Of Cloud Storage And Computing**
The unification of DAQ hardware and onboard intelligence has enabled systems to be increasingly embedded or remote. In many industries, it has paved the way for entirely new applications. As a result, the Internet of Things is finally unfolding before our very eyes as the physical world is embedded with intelligence and humans now can collect data sets about virtually any environment around them. The ability to process and analyze these new data sets about the physical world will have profound effects across a massive array of industries. From health care to energy generation, from transportation to fitness equipment, and from building automation to insurance, the possibilities are virtually endless. In most of these industries, content (or the data collected) is not the problem. There are plenty of smart people collecting lots of useful data out there. To date this has mainly been an IT problem.
The Internet of Things is generating massive amounts of data from remote, field-based equipment spread literally across the world and sometimes in the most remote and inhospitable environments. These distributed acquisition and analysis nodes (DAANs) embedded in other end products are effectively computer systems with software drivers and images that often connect to several computer networks in parallel. They form some of the most complex distributed systems and generate some of the largest data sets the world has ever seen. These systems need remote network-based systems management tools to automate the configurations, maintenance, and upgrades of the DAANs and a way to efficiently and cost-effectively process all of that data. Complicating matters is that if you reduce the traditional IT topology for most of the organizations collecting such data to a simple form, you find they are actually running two parallel networks of distributed systems: “the embedded network” that is connected to all of the field devices (DAANs) collecting the data and “the traditional IT network” where the most useful data analysis is implemented and distributed to users. More often than not, there is a massive fracture between these two parallel networks within organizations, and they are incapable of interoperating. This means that the data sets cannot get to the point(s) where they are most useful. Think of the power an oil and gas company could achieve by collecting real-time data on the amount of oil coming out of the ground and running through a pipeline in Alaska and then being able to get that data to the accounting department, the purchasing department, the logistics department, or the financial department—all located in Houston—within minutes or hours instead of days or months. The existence of parallel networks within organizations and the major investment made in them have been major inhibitors for the Internet of Things. 
However, today cloud storage, cloud computational power, and cloud-based “big data” tools have met these challenges. It is simple to use cloud storage and cloud computing resources to create a single aggregation point for data coming in from a large number of embedded devices (such as the DAANs) and provide access to that data from any group within the organization. This solves the problem of the two parallel embedded and IT networks that don’t interoperate. Placing near-infinite storage and computing resources from the cloud, used and billed on demand, at the fingertips of users provides solutions to the challenges of distributed system management and crunching huge data sets of acquired measurement data. Big data tool suites offered by cloud providers make it easy to ingest and make sense of these huge measurement data sets. To summarize, cloud technologies offer three broad benefits for distributed system management and data access: aggregation of data, access to data, and offloading of computationally heavy tasks.
* Contribution by Dr. Tom Bradicich, R&D Fellow, National Instruments
** Contribution by Matt Wood, Senior Manager and Principal Data Scientist, Amazon Web Services
Richard McDonell is the director of Americas technical marketing at National Instruments. He joined National Instruments with a BSEE in 1999 and led the successful adoption of NI TestStand test management software and PXI modular instrumentation while serving as an industry leader in the test engineering community through many technical presentations, articles, and whitepapers. His specific technical focus areas include modular test software and hardware system design, parallel test strategy, and instrument control bus technology. He holds a bachelor’s degree in electrical engineering from Texas A&M University.
What Is a Pressure Switch, and How Does It Work? A pressure switch is a device designed to give an output in response to a set level of pressure. The switch provides electrical feedback on a rise or fall in pressure. The automated response of the switch is based on pressure. Its applications range from industry to residences and offices. For example, it is often used in well pump systems, electronic gas compressors, security alarms, and, commercially, in the pressure panels of sliding doors. A differential pressure switch is very similar to a regular pressure switch. However, in this case, the switch is activated when it senses a difference in pressure between two points. There are two main categories of pressure switch — the mechanical pressure switch and the electronic pressure switch. Their working principle is the same, with a slight difference in their structures and the pressure-sensing element used. The most commonly used sensing elements are bellows, diaphragms, and pistons. How Does It Work? Before analysing its working principle, let’s look into the structure of a basic pressure switch. The components of a pressure switch include:
(A) Micro-switch
(B) Operating pin
(C) Range spring
(D) Operating piston
(E) Insulated trip button
(F) Switch case
(G) Trip setting nut
(H) Inlet pressure
A modern-day pressure switch primarily contains a pressure sensor and a switch contact. Whenever the set pressure level is reached, the switch contact activates or controls the electric circuit. A mechanical pressure switch uses, along with the primary components above, a piston as the pressure-sensing element. However, the operating principle of this and of switches with bellows or diaphragm sensing elements remains the same. The components sit inside the switch case (F); the inlet pressure (H) acts against the operating piston (D), and the resulting force moves the range spring (C). The spring can be adjusted to a set pressure that activates the switch.
The operating pin (B), activated by the motion of the spring and piston, in turn triggers the micro-switch (A). The micro-switch has two contacts — normally closed (NC) and normally open (NO). In the absence of pressure, the electric contact remains normally open; when the set pressure is reached, the micro-switch closes the NO connection, completing the circuit and making the switch work. Difference in the Working of a Mechanical and an Electronic Pressure Switch: The physical mechanism of a mechanical pressure switch is activated by fluid pressure, for example in water pumps or hydraulic systems, whereas electronic pressure switches work through electronic pressure sensors and an electronic circuit. In some switches the pressure point cannot be adjusted, as it is pre-set. Mechanical switches can work without any auxiliary power and are known to handle higher voltages better. On the other hand, electronic switches can be adjusted to change the delay time, output signal, deadband adjustability, turndown ratio, etc. Selecting the Right Pressure Switch: To select the optimal pressure switch, the user needs to take into account the type of process media, the working pressure, the temperature range, the deadband or differential (the pressure difference between the switch's set point and reset point), an enclosure suited to the operating environment and, finally, the type of pressure switch the application demands. For example, for high-pressure applications one should choose a piston design, while for low-pressure applications a diaphragm-operated switch is suitable. If you have any queries, Get in Touch with Us!
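The set-point/reset-point behaviour described above (the deadband) can be modelled with simple hysteresis logic. This is a minimal sketch assuming a normally open contact; the set and reset pressures are invented values in bar, and it illustrates the behaviour rather than any particular product.

```python
# Sketch of pressure-switch hysteresis: the contact closes when pressure
# rises to the set point and only reopens once it falls to the reset
# point. The gap between the two pressures is the deadband.
# Set/reset values below are invented for illustration.

class PressureSwitch:
    def __init__(self, set_point, reset_point):
        assert reset_point < set_point, "deadband must be positive"
        self.set_point = set_point      # pressure that closes the contact
        self.reset_point = reset_point  # pressure that reopens it
        self.closed = False             # NO contact: open at rest

    def update(self, pressure):
        if not self.closed and pressure >= self.set_point:
            self.closed = True
        elif self.closed and pressure <= self.reset_point:
            self.closed = False
        return self.closed

sw = PressureSwitch(set_point=6.0, reset_point=4.5)  # 1.5 bar deadband
readings = [3.0, 5.0, 6.2, 5.0, 4.4, 5.9]
print([sw.update(p) for p in readings])
# -> [False, False, True, True, False, False]
```

Note how 5.0 bar leaves the contact open on the way up but keeps it closed on the way down: without that deadband, a pressure hovering near one threshold would chatter the contact rapidly open and closed.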
School reform's meager results As 56 million children return to the nation's 133,000 elementary and secondary schools, the promise of "reform" is again in the air. Education Secretary Arne Duncan has announced $4 billion in "Race to the Top" grants to states whose proposals demonstrate, according to Duncan, "a bold commitment to education reform" and "creativity and innovation [that are] breathtaking." What they really show is that few subjects inspire more intellectual dishonesty and political puffery than "school reform." Since the 1960s, waves of "reform" haven't produced meaningful achievement gains. The most reliable tests are given by the National Assessment of Educational Progress (NAEP). The reading and math tests, graded on a 0-500 scale, measure 9-year-olds, 13-year-olds and 17-year-olds. In 1971, the initial year for the reading test, the average score for 17-year-olds was 285; in 2008, the average score was 286. The math test started in 1973, when 17-year-olds averaged 304; in 2008, the average was 306. To be sure, some improvements have occurred in elementary schools. But what good are they if they're erased by high school? There has also been a modest narrowing in the high school achievement gaps among whites, blacks and Hispanics; unfortunately, the narrowing generally stopped in the late 1980s. (Average test scores have remained stable because, although the scores of blacks and Hispanics have risen slightly, the size of these minority groups also expanded. This means that their still-low scores exert a bigger drag on the average. The two factors offset each other.) Standard theories don't explain this meager progress. Too few teachers? Not really. From 1970 to 2008, the student population increased 8 percent and the number of teachers rose 61 percent. The student-teacher ratio has fallen sharply, from 27-to-1 in 1955 to 15-to-1 in 2007. Are teachers paid too little? Perhaps, but that's not obvious. 
In 2008, the average teacher earned $53,230; two full-time teachers married to each other and making average pay would belong in the richest 20 percent of households (2008 qualifying income: $100,240). Maybe more preschool would help. Yet, the share of 3- and 4-year-olds in preschool has rocketed from 11 percent in 1965 to 53 percent in 2008. "Reforms" have disappointed for two reasons. First, no one has yet discovered transformative changes in curriculum or pedagogy, especially for inner-city schools, that are (in business lingo) "scalable" -- easily transferable to other schools, where they would predictably produce achievement gains. Efforts in New York and the District to raise educational standards involve contentious and precarious school-by-school campaigns to purge "ineffective" teachers and principals. Charter schools might break this pattern, though there are grounds for skepticism. In 2009, the 4,700 charter schools enrolled about 3 percent of students and did not uniformly show achievement gains. The larger cause of failure is almost unmentionable: shrunken student motivation. Students, after all, have to do the work. If they aren't motivated, even capable teachers may fail. Motivation comes from many sources: curiosity and ambition; parental expectations; the desire to get into a "good" college; inspiring or intimidating teachers; peer pressure. The unstated assumption of much school "reform" is that if students aren't motivated, it's mainly the fault of schools and teachers. The reality is that, as high schools have become more inclusive (in 1950, 40 percent of 17-year-olds had dropped out, compared with about 25 percent today) and adolescent culture has strengthened, the authority of teachers and schools has eroded. That applies more to high schools than to elementary schools, helping explain why early achievement gains evaporate. 
Motivation is weak because more students (of all races and economic classes, let it be added) don't like school, don't work hard and don't do well. In a 2008 survey of public high school teachers, 21 percent judged student absenteeism a serious problem; 29 percent cited "student apathy." The goal of expanding "access" -- giving more students more years of schooling -- tends to lower educational standards. Michael Kirst, an emeritus education professor at Stanford, estimates that 60 percent of incoming community college students and 30 percent of freshmen at four-year colleges need remedial reading and math courses. Against these realities, school "reform" rhetoric is blissfully evasive. It is often an exercise in extravagant expectations. Even if George W. Bush's No Child Left Behind program had been phenomenally successful (it wasn't), many thousands of children would have been left behind. Now Duncan routinely urges "a great teacher" in every classroom. That would be about 3.7 million "great" teachers -- a feat akin to having every college football team composed of all-Americans. With this sort of intellectual rigor, what school "reform" promises is more disillusion.
The rare salp, Helicosalpa, has now been recorded as far north as Norway. They are so rare that even researchers have only seen a few of them alive. We have now published a scientific article on the topic. Only three species are known: Helicosalpa virgula, H. komaii and H. younti. All three have been observed in the Pacific Ocean, but in the Atlantic Ocean only H. virgula has been observed. We are looking for tissue samples, through citizen science: 1) If you see one, please take a picture of it. 2) Cut out a tissue sample (golf ball size), and put it in a clean plastic bag in your home freezer. 3) Contact us. Help us in getting tissue samples from the huge gelatinous spheres in the ocean, especially from the Mediterranean Sea! Read this comic, and learn more about them! This is squid eggs, possibly from the 10-armed genus Loligo. About 40 egg sacs are attached to the tip of a kelp blade (hanging from a pier). These embryos are around 9 days old, and will hatch when they are about 21 days old. The red pair of eyes is clearly visible through the egg. As the embryo grows, the yolk sack (in the middle of the picture) will be reduced and arms formed. To the left of the eyes, the fins will develop. (Photo credit: HR/ Sea Snack Norway.) Maja cf. brachydactyla, or common spider crab, described by H. Balss in 1922, has 5 pairs of visible legs, and belongs within the group of "true crabs" (Brachyura). This picture is taken from Crozon in France (photo credit: HR/ Sea Snack Norway). Lithodes maja (Linnaeus, 1758) (in Norw.: trollkrabbe, in English: northern stone crab) resembles the crab to the left, but has only 4 pairs of visible legs (the 5th pair is very small, and hidden). It therefore belongs within the "crab-like" species (Anomura). This picture is from Norway (photo credit: HR/ Sea Snack Norway). Why do they have almost the same name - both including "maja"? There was actually a dispute among several researchers around 1700-1800 about how to use this name.
The ongoing dilemma was submitted to the International Commission on Zoological Nomenclature (ICZN) in 1958, which had to solve the case. ICZN acts as an adviser for the zoological community, and makes rules for the naming of animals. The Oceanographic Museum of Monaco is a museum of marine sciences, and was inaugurated in 1910 by Prince Albert I. The Prince was very interested in oceanography and made several cruises in the Mediterranean Sea but also to e.g. the Azores and Svalbard. The museum is built on a steep cliff, and just the architecture is worth a trip to the Principality of Monaco, the second-smallest country in the world. The famous Jacques-Yves Cousteau, also a diving pioneer, was a curator at the museum for many years (photo credit: HR/ Sea Snack Norway). Rare, huge gelatinous spheres have been recorded from the North East Atlantic Ocean, and are attributed to squid egg mass. They are about 1 meter in diameter, and many of them have a dark streak through the center. We are investigating these spheres, and have received around 50 observations, mostly from divers. The first observation is from Croatia in 1999. The first observation from the Norwegian coast is from 2001. Recently, tissue samples of four spheres from Norwegian waters were DNA tested, showing they are made by Illex coindetii, or "southern shortfin squid" (in Norw.: sørlig kortfinnet 10-armet blekksprut). The species was described by Vérany in 1839, so it took 180 years before its egg mass was identified. If you see such a sphere in the ocean, especially in the Mediterranean Sea, we would very much like a tissue sample. Could you please cut out a small tissue sample of the sphere wall and put it in a clean plastic bag in the freezer before contacting us! Also, if you are able to take a picture or video of the sphere, that would be great! We are used to high and low tide twice daily, but that is not the case everywhere on the planet.
Mont-Saint-Michel is an island situated in a large bay in Normandy in France, and the island with its monastery, picturesque from every possible angle (!), is one of the most visited tourist attractions in France. The large Mont-Saint-Michel Bay is part of the English Channel, known for strong tidal currents, and the castle lies at the innermost part of the bay. Here, the tide only comes in every second week (at spring tides, not neap tides), and is then visible from the castle! This is not all, because since the island is surrounded by mud flats, the tidal currents, or high tides, are only strong enough to surround the whole island about 10 days a year! The phenomenon is called "Les Grandes Marées". It creates a tidal bore that is even more visible as it enters the long, narrow inlet at the mouth of the river close to the monastery. Legend has it that the tide comes in "as fast as galloping horses", but that is an overstatement. The tide comes in quite fast, but not that fast. Walks around the castle at low tide are possible, BUT always walk with a certified guide. There is both quicksand and tidal currents to look out for! Mud flats and salt marshes, also those of Mont-Saint-Michel Bay, play an important nursery role for many fish species, and at least 100 fish species are known in these intertidal areas. The bay also hosts many birds and seals (Photo credit: HR/ Sea Snack Norway.) You may have found bristle worms crawling around on the sea floor (børstemark in Norwegian, and polychaetes in Latin), but some species also swim. This is Tomopteris sp., and it is about 4,5 cm long with long antennae. It spends its entire life as plankton, free swimming in the water mass. It can swim very fast, both forwards and backwards. If disturbed, it may play dead for a while, hoping the predator or danger will go away. Tomopteris is known for bioluminescence, which means that it can make its own light to e.g. scare predators. Tomopteris emits blue light, but e.g. T.
helgolandica also emits yellow light, which is rarer in the ocean. (Photo credit: HR/ Sea Snack Norway.) The Cnidaria phylum contains all the jelly-like organisms, many of which sting. There are several classes, and one of them is represented here - Hydrozoa ("småmaneter" in Norwegian). The two upper pictures show two common hydrozoans in Norwegian waters: upper left, Sarsia tubulosa, described by M. Sars in 1835, and Tima bairdii (Johnston, 1833). Notice the small crustacean, or amphipod, attached to the umbrella. It is a parasitic hyperiid. The two lower species are also hydrozoans, but belong to a special group or order, the siphonophores. Siphonophores may look like one animal, but are in fact a colony of small individual animals, each with special tasks such as attack or defence. They prey upon e.g. small crustaceans, fish and larvae. The most famous siphonophore is perhaps the Blue bottle, or Portuguese man o' war. Siphonophores move quite quickly through the water mass, and may be difficult to photograph. Two common species for Norwegian waters are: lower left, the long "thread", cf. Nanomia cara, described by Agassiz in 1865, and (right) cf. Physophora hydrostatica, described by Forsskål in 1775. (Photo credit: HR/ Sea Snack Norway.) This is squid eggs (in Norwegian: blekksprutegg), washed ashore after a storm in France. They look like eggs from a 10-armed species, and might be from Loligo vulgaris. L. vulgaris spends the winter months in deep waters outside Portugal, swims past France to the North Sea in the spring/summer - and back again. These are all cuttlebones (in Norwegian: blekksprutskall), the internal shell of 10-armed cuttlefish. When washed ashore they are collected and actually used for caged birds, giving them extra calcium and a way of smoothing down their growing beak. (Photo credit: HR/ Sea Snack Norway.) Have you ever encountered loads of foam at the beach? These pictures show the same bay, only a few weeks apart (in the middle of the day, and at sunset).
Salt water contains dissolved salts, proteins and fat. If you add dead algae and strong wind and waves, thick foam usually forms ashore. Sea foam is usually harmless, and only indicates a productive ocean ecosystem. However, occasions with unpleasant outcomes have been reported. Large numbers of dead sea birds were found in California, and soap-like foam on their feathers from decaying algae made it difficult for them to fly, also causing hypothermia and death. If the foam contains certain decaying toxic algae, the foam may cause skin irritations and respiratory discomfort to humans. (Photo credit: HR/ Sea Snack Norway.) Sea hare (in Norwegian: sjøhare), Aplysia punctata. The color of sea hares may vary depending on what type of food/algae they eat. (Photo: HR/ Sea Snack Norway). Sea hare, Aplysia punctata. They usually occur singly, but large numbers may be observed when mating. This species also occurs in Norwegian waters. (Photo: HR/ Sea Snack Norway). Sea hares (in Norwegian: sjøhare) are funny-looking creatures, and in frontal view this dead specimen may look like a tiny hippopotamus. It belongs within the Mollusca phylum. This is Aplysia depilans, a specimen collected at a beach in France. It is common in French waters, but has not been recorded from Norway. (Photo: HR/ Sea Snack Norway.) Visiting a National Park like the Wadden Sea (in Norwegian: vadehavet) requires caution and respect for nature. The Wadden Sea stretches from southern Denmark, via Germany, to the Netherlands. Join a guided tour and learn more about this fascinating flat, muddy, but animal-rich area! (Photos: HR/ Sea Snack Norway.) In a National Park - "Take nothing but pictures. Leave nothing but footprints. Kill nothing but time". A motto of the Baltimore Grotto. Have you ever seen such "spiky balls" in the ocean (two left pictures)? These "balls" are holdfasts (in Norw.: festeorganet) of furbelows, a brown alga. In Norwegian the kelp is called "draugtare".
Legend has it that "draugen" are wicked ghosts of dead fishermen who died at sea, doomed to haunt the waves. Furbelows is not too common in Norway, and likes it rough, at wave-exposed coasts. For other species (right picture) the kelp holdfast may give good shelter for all kinds of small creatures, and is a good place to look for animals! (Photos: HR/ Sea Snack Norway.) Diving gives you unique opportunities to discover interesting phenomena below the water surface. Here are some huge whale knuckles from SW Norway, Bergen, possibly seen by humans for the first time. (Photos: HR/ Sea Snack Norway.) Have you ever studied a starfish (In Norwegian: sjøstjerne) under the microscope? Try it, and you'll discover some fascinating structures. Here are three differently shaped spines from three species. The spines are very small and fragile, all measuring under 0.5 mm in length. (Photo: HR/ Sea Snack Norway). This fish is called Shanny, or Blenny (In Norwegian: tangkvabbe). In Latin it is called Lipophrys pholis, and was named by Carl von Linné in 1758. This species is found in shallow waters of rocky coasts, and may also remain out of water, breathing air. One of its favourite foods is barnacles. After spawning in the spring, the male may guard eggs from several females. They are not too long, but may reach around 20 cm in length. This species is actually on the Red List (list of threatened species) due to its homebound behaviour. Also, due to its inability to move away, it can be used as an indicator species for pollution monitoring. This specimen was found at Sotra/Bergen, on the SW coast of Norway. (Photo: HR/ Sea Snack Norway.) 1000 species of sea mites (In Norwegian: sjømidd) have been described worldwide. They are also called Halacaridae in Latin, and are very small creatures. We're talking about less than 0.5 mm long (!). They are so small that on the head of a pin you may place 15-20 mites! Look at the claws, which are "hook-shaped".
Some feed on algae, others are predators or parasites. (Photo: HR/ Sea Snack Norway.) This is a mite you also may encounter in the ocean, in the littoral zone, where it is built for clinging to rocks and algae even in rough weather. Look at the claws. It belongs to a group called Hyadesia. About 40 species of Hyadesia and Amhyadesia have been described worldwide. (Photo: HR/ Sea Snack Norway.) Photo shoot of a juvenile garfish (6.5 cm long). Have you ever seen a juvenile garfish (in Norw.: horngjel)? I don't think too many people have! As you can see in the close-up picture, the upper beak (or jaw) is still much shorter than the lower beak, but it will grow longer as the fish matures. Adults can be seen in large schools in the open ocean but move into shallow water during springtime, when spawning. (Photos: HR/ Sea Snack Norway.) This live specimen was only 6.5 cm long, but adults become about 1 m long. In Latin they are called Belone belone, described by Linnaeus in 1761. In several countries it is a popular seafood with green bones. This may look like a plant, but it is a sponge (In Norw.: svamp) - the simplest of multi-celled animals. It can filter seawater for algae, bacteria, and even small crustaceans. All sponges may be placed in only four classes, and most species are saltwater. Most sponges are actually both female and male. Sponges are found to accumulate specific metals, so they might become promising biomonitors of metal contamination. Metals may be hazardous to humans since they are accumulated in aquatic animals and transported through the food web, posing a risk to us through seafood consumption. Identifying sponges is difficult. The sponge in the picture is probably Polymastia boletiformis, which was described by Lamarck in 1815. (Photo: HR/ Sea Snack Norway.) A juvenile flatfish (In Norw.: en juvenil flatfisk fra "varfamilien") caught at the surface in June. It is only 25 mm long, and around that time they settle on the bottom.
Both eyes are tilted to the left side, and this could possibly be a juvenile of turbot, brill or topknot (piggvar, slettvar eller hårvar). (Photo: HR/ Sea Snack Norway.) Common Ctenophora from Norwegian waters (In Norw.: ribbemaneter). (From left to right) Pleurobrachia pileus, cf. Beroe cucumis, Beroe gracilis, Bolinopsis infundibulum, and Mnemiopsis leidyi. The last of these ctenophores, Mnemiopsis leidyi, is invasive to Norwegian waters, and was discovered here for the first time in 2005. It is a zooplankton predator, and can eat as much as 15 times its own weight each day. Ctenophores are difficult to photograph. They are fragile, look like a "sack of water", and move all the time. (Photos: HR/ Sea Snack Norway.) Lumpsuckers (in Norw.: rognkjeks) enter shallow waters in late winter/spring to spawn. The males watch the eggs for 6-10 weeks before hatching. Usually the females spawn subtidally, but sometimes just above the low water spring tide level. Males then have to spout water from their mouths over egg masses exposed to air by low tides! A juvenile lumpsucker from shallow waters (approx. 1.5 cm long). Not too much is known about lumpsucker biology, but when reaching about 1 year of age they appear to move into deeper water. (Photos: HR/ Sea Snack Norway.) The biology of the monkfish (in Norw.: breiflabb) is not too well known. During summer it can be found in shallow waters; in wintertime and when spawning, it can be found below 2000 m depth. Picture 1 (left) shows the sharp teeth of a juvenile monkfish (20 cm long), 2) the otoliths, and 3) a specimen hunting for food with its own fishing rod (one of the dorsal fins) on its head. In Latin, monkfish are called Lophius piscatorius, after Carl von Linné who described and named the fish in 1758. (Photos: HR/ Sea Snack Norway). The Aquarium "Sea Life" in Helsinki is great fun, showing many kinds of living sea animals. On the wall you may also encounter the extinct "Helicoprion", a shark-like fish with a "tooth-whirl" in the lower jaw. It looks a bit like a circular saw.
Helicoprion lived around 280 million years ago. (Photos: HR/ Sea Snack Norway.) Australian Great Barrier Reef animals (left to right): sea feather, turtle, sea cucumber (Thelenota ananas Jaeger, 1883) and brain coral. (Photos: HR/ Sea Snack Norway.) A boiled great scallop (Pecten maximus (Linnaeus, 1758)) (In Norw.: kamskjell) shows the edible parts, which are the muscle and the gonad. The male part of the gonad is white, and the female orange. (Photo: HR/ Sea Snack Norway). A spider crab (Hyas sp.) (in Norw.: pyntekrabbe) is using algae as camouflage. Some of these species are thought to have antibiotic properties. (Photo: HR/ Sea Snack Norway.) Ghost fishing (In Norw.: spøkelsesfiske). Fishing gear that is lost or abandoned is killing thousands of fish each year. Surveys and clean-up programmes should be undertaken in order to establish how widespread this problem really is! (Photo: HR/ Sea Snack Norway.) The hermit crab (In Norw.: eremittkreps) Pagurus prideaux was described by Leach in 1815. It lives in symbiosis with a sea anemone (Fabricius, 1779), and carries it on its back. The sea anemone (which is the "dotted layer" on the crab's shell) feeds on leftovers from the crab's meals, and the crab is protected from other animals by the anemone's sticky tentacles. (Photo: HR/ Sea Snack Norway.) The bush-shaped nudibranch, Dendronotus frondosus (Ascanius, 1774), photographed in the Bergen area (In Norw.: busksnegl, en type nakensnegl). It can reach a size of up to about 10 cm in length. (Photo: HR/ Sea Snack Norway.) Another, but smaller, nudibranch species (Onchidoris sp.) reaches only about 2 cm in length. (Photo: HR/ Sea Snack Norway.) Dead man's fingers... It's a soft coral called Alcyonium digitatum and was described by Linnaeus in 1758. (In Norwegian: dødningehånd, en bløtkorall.) Edible crab (Cancer pagurus Linnaeus, 1758) (in Norw.: taskekrabbe) "hiding" in the sand.
Seaweed, including kelp, from Norway (from left to right): oarweed, thongweed, Devil's apron, and green ribbon/green nori. Tang og tare fra norskekysten (fra venstre til høyre): fingertare, remtang, sukkertare og tarmgrønske. Latin (from left to right): Laminaria digitata, Himanthalia elongata, Laminaria saccharina, and Enteromorpha sp. Huge amounts of sea squirts (Ciona intestinalis) (In Norw.: sjøpung) are common in several places along the Norwegian coast. Research is now looking into farming possibilities. The starfish Asterias rubens eats "almost anything", and here it's on top of a sea squirt (Ciona intestinalis). The Havrå farm (Havråtunet) at Osterøy (near Bergen), a heritage site, is listed in literature as far back as the year 1303. Archaeological discoveries indicate settlement as far back as the Stone Age. The Norwegian Masters (NM) in sailing was arranged in Bergen in August 2015. The boat type, "Oselver, spritsail", resembles old boats constructed around the year 300, and has been typical in South-Western Norway for several hundred years. The boats are between 5-10 m long, and are built of pine or oak. A European flounder (In Norw.: skrubbe) (Platichthys flesus Linnaeus, 1758) was observed during diving at just a few meters depth. The starfish Luidia ciliaris (Philippi, 1837) observed in the Bergen area. In Norwegian it's called the "seven-armed starfish". Small creatures (from top left to right): Amphilochus manudens Bate, 1862, Boroecia borealis (Sars, 1866), Laetmatophilus 1899, Platysympus typicus (Sars, 1870), Gastropoda juv. indet., and Themisto sp. Starfish, which usually have five arms, can sometimes be deformed; this specimen (1805) is from Iceland (Bioice material). A trip on the Nærøyfjord, Gudvangen, Norway. "Water. The element that won't … majestically across two thirds of our world … the surface in a swirling pattern of lakes, rivers, seas, and oceans … moving and mysterious … a vast fluid theatre challenging the adventurer within us … yours to accept. Don't disappoint it.
Give it your respect and enjoy the experience. See you out there!" By Dan Trotter. Common Dog Whelk, Nucella lapillus (Linnaeus, 1758), from the littoral zone (in Norwegian: purpursnegl). Eggs from Nucella lapillus. An egg spiral laid by a nudibranch (possibly Onchidoris bilamellata (Linnaeus, 1767)). Seaweed pipefish (Syngnathus sp.) (In Norwegian: nålefisk) in the littoral zone, observed while catching small Crustacea. A trip to the Lysebotn fjord and "Kjeragbolten", a boulder (glacial deposit) suspended above a 1000 m abyss. For more pictures from Norway (Norge, Norwegen, Noorwegen, Norvège, 挪威) please visit www.visitnorway.com.
The curious name Leneve first appeared in the U.S. baby name data 105 years ago…then disappeared again the very next year.

- 1912: unlisted
- 1911: unlisted
- 1910: 7 baby girls named Leneve
- 1909: unlisted
- 1908: unlisted

A similar spike can be seen in the Social Security Death Index (SSDI) data:

- 1912: none
- 1911: 6 people named Leneve
- 1910: 16 people named Leneve
- 1909: none
- 1908: none

Where did the name Leneve come from all of a sudden in 1910? We’ll get to that in a second. First, let’s start with the murder. On July 13, 1910, the remains of a body thought to belong to music hall singer Belle Elmore (legal name Cora Crippen) were found in the basement of her home in London. Belle had been missing since February. The main suspect was her husband, Hawley Crippen, a homeopathic doctor who had fled to Belgium several days earlier with his young lover, Ethel Le Neve. A warrant for the arrest of Crippen and Le Neve was issued on July 16. The pair — disguised as father and son, and using the surname Robinson — boarded a Canada-bound steamship in Antwerp on July 20. The captain of the ship was suspicious of the pair, so he telegraphed the boat’s owners, who in turn telegraphed Scotland Yard. A London police officer boarded an even faster steamship headed for Canada on July 23. Fascinatingly, Crippen and Le Neve were not only unaware that they were being trailed by the London police on another boat, but they also didn’t know that newspapers around the world had picked up their story and that millions of people were reading about the dramatic transatlantic race, day by day, as it occurred. The faster ship reached Quebec first, and the officer was able to intercept and arrest the fugitives on July 31. (This makes Crippen and Le Neve the first criminals to be apprehended with the assistance of wireless communication.) The next month, the pair sailed back to England. They were tried separately. Crippen was found guilty. He was executed by hanging on November 23.
Ethel Le Neve was acquitted. She promptly left for New York. To this day, no one knows exactly whose remains were in that basement in London, how they got there, and who was to blame for it all. But we do know that Ethel Le Neve (often written “Leneve” in U.S. newspapers) was a fixture in the news in mid-1910. This is no doubt what boosted the rare name Leneve onto the baby name charts for the first and only time. Leneve was the top one-hit wonder name of 1910, in fact. Ethel was back in London by 1915. She eventually got married and had two children. She died in 1967, never having revealed to her children that she was once a world-famous runaway. (They found out in the 1980s, after being contacted by a crime historian.) What are your thoughts on the baby name Leneve?
From the fall of 1943 to the spring of 1945, the Castle at Nikolsburg was transformed into a depot of works of art and objets d’art stolen by the Einsatzstab Reichsleiter Rosenberg (ERR) mostly in France, and to a lesser extent in Belgium and Holland. At least five trains filled with loot packed into hundreds of crates made their way from Paris to Nikolsburg, where they were dutifully unloaded and placed in dozens of rooms throughout the Castle. As the Western Allies advanced across France, Belgium and Holland, many of the crates were transferred to Altaussee in the Salzkammergut section of Austria, where the Reich authorities had created a central underground facility consisting of a network of salt mine galleries in which to store plundered art from across Europe. Not all the crates from Nikolsburg, however, made it to Altaussee. An unknown number remained at the Castle. In the final days of the Second World War, a fierce battle raged in and around Nikolsburg between retreating German forces and advancing Red Army units. The town was not spared and the Castle took massive artillery hits. As Soviet troops closed in on the town, the occupants of the Castle removed many of the remaining objects to safer locations across town, including the local museum. A major fire produced by systematic shelling gutted the Castle. To this day, it is not clear how much of it burned down. French restitution authorities including Rose Valland concluded that the Castle had burned to a crisp and its contents turned to ash. Curiously enough, however, two years after this hasty verdict was pronounced, the Czech government returned to France several hundred items from Nikolsburg/Mikulov which bore the identifying numbers assigned to them by the ERR in occupied Paris, at the Jeu de Paume, where they had been brought and sorted.
Some of these items belonged to Veil Picard (WP), David David-Weill (DW), Louis Louis-Dreyfus (DRF, DRD), the Hirsch family (HIR), the Oppenheimers (OPPE) and many others, including objects seized during Möbel-Aktion (MA-B). Until a full accounting is produced of the items stored at Nikolsburg, a doubt will always linger whether more objects from the Nikolsburg hoard remain in the Czech Republic or in Slovakia or even perhaps in Austria. No one knows for sure.
Those lucky enough to have access to old photos of their family members have a piece of the past through these images. These photos provide a glimpse into the lives their ancestors led. But, with so much temporal distance between past and present, it can be hard to identify people and the years the photos were taken. MyCanvas has created this blog series to help those trying to identify old photographs! There’s one simple clue in every photo that can narrow down your search: clothing! Clothes give many clues about class and status, trends that point to specific years, and even a general range to help you find who you’re looking for. Early Photography: 1839-1860 Our first stop is the early decades of photography. Daguerreotypes–the first modern photos–appeared in public in 1839. However, they faded out of the limelight by 1860. So if you’re lucky enough to have one of these treasures, you’ll be looking for ancestors within this 20 year time frame. Women’s clothing can be more telling than men’s throughout the 19th century. But there are a few subtle clues. Men’s hairstyle and choice of neckties can narrow down your search. Here are the styles and trends for each decade. Those of you familiar with pioneer history and westward expansion should notice some familiar styles in this decade! Dresses used pleated bodices, low sloping “natural” shoulders, and bell-shaped skirts which widened through the decade. Look for accessories like crochet shawls and frontier-style bonnets. These bonnets were plainer than previous decades, and tied under the chin. Married women often wore linen caps, adorned with lace and ribbons, and bonnets went over them. Evening dresses came off the shoulder, with elbow-length flounces. Ladies would wear these with sheer shawls and opera gloves. Women’s outerwear also came back into fashion with narrower sleeves. Jackets and coats were cape-like, especially around the collar. Fur muffs for the hands became fashionable to have in winter. 
Women parted their hair down the center and often wore it in a knot or a bun at the back of the head. They often wore ringlets (called “spaniel curls”) on either side of their hair, which were often styled independently of the rest. Alternatively, hair on the sides of the head might be braided or smoothed and looped over the ears, with the excess tucked into the bun or knot. The upper classes and those attending formal events wore tall collars and neckties, and coats that tapered in at the waist with a rounded chest to give them an hourglass-style figure. Top hats became taller and straighter, beginning to take the shape of the later stovepipe. Denim pants also appeared during this decade. While it’s not likely that early jeans would have been worn in a formal photo, they might appear in photos of everyday American life, or in photos of the working class. Children’s styles mirrored their parents’. Young boys may have worn long tunics up through age 6, and toddlers of all genders wore cotton dresses with long sleeves. Girls wore very similar styles to their mothers. The 1850s saw the introduction of the ambrotype, a new method of photography that was less expensive than daguerreotypes. James Ambrose Cutting patented them in 1854. These photos were popular between the early 1850s and the mid-1860s. If your photo is an ambrotype, narrow your search down to these years. Women wore very wide skirts full of flounces, accented by crinolines or hoop skirts. Women’s fashion tended to come from Paris, while London held the strongest influence over men’s clothing. Look for bell-shaped sleeves and the appearance of bodices that buttoned in front. Off-the-shoulder evening dresses had shorter sleeves. Outer garments, if they appear in photos, were cape-like jackets worn over their dresses. These too had multiple flounces. Bonnets, still popular in these years, gained heavy lace trim. You may also see bloomer dresses, which may point to the photo being circa 1851.
This isn’t to say women ran around in their underwear! Bloomer dresses were a healthier, more comfortable alternative to the restrictive and sometimes dangerous corsets of the time. The dress consisted of loose trousers and a short skirt worn over them, inspired by Turkish pantaloons. If your ancestor is wearing these, it’s likely she was campaigning for women’s rights. They were also very popular among women living in the West due to their flexibility in frontier work and travel. Hair, on the other hand, was simpler. Women wore their hair parted down the middle and in a bun or knot at the back of their heads. The sides had volume to cover the ears, whether in a puff or with ringlets. Men wore tall, highly-starched collars and cravats. You may see suits where the coat, waistcoat, and trousers were all of the same fabric, which was a trend in this decade. Top hats became more extreme, leaning toward the “stovepipe” fashion we see from Abraham Lincoln. Bowler hats appear in 1850, but only the working classes wore them. Facial hair began to be very popular, with a wide variety of styles. They might be any combination of a mustache, beard, and sideburns. Men’s hair had a high part on one side, smoothed down with a bit of volume around the ears. Much like the previous decade, children dressed like their parents. Look for young boys (under 6-8) in belted tunics with longer hair. Boys’ suits may have had a wide, rounded, frilled collar distinguishing them from their fathers. Girls’ skirts were shorter, possibly knee-length, with pantalettes beneath. Stay tuned to learn more about identifying photos by clothing! Next week, we’ll be exploring the 1860s-1870s: the American Civil War and Reconstruction.
Pronouns and Case: Why Can't a Pronoun Be More Like a Noun? Can't live with 'em, can't live without 'em. Between you and I, pronouns drive myself crazy, and I bet they do yourself, too. A quick look at the disastrous last sentence and a brief survey of English explains why pronouns are more maddening than a hormone-crazed teenager. Old English, like Latin, depended on word endings to express grammatical relationships. These endings are called inflections. For example, consider the Old English word for stone, “stan.” Study this chart:

- Nominative and accusative singular: stan
- Nominative and accusative plural: stanas

There are only three contexts in which myself should be used: as a reflexive pronoun (“I fed myself”), as an intensifier (“I myself would never leave early”), and in idioms (“I did it all by myself”).

You Could Look It Up: Case is the form of a noun or pronoun that shows how it is used in a sentence.

Case is the grammatical role a noun or pronoun plays in a sentence. English has three cases: nominative, objective, and possessive. Fortunately, contemporary English is greatly simplified from Old English. (Would I lie/lay to you?) Today, nouns remain the same in the nominative and accusative cases and inflect only for the possessive and the plural. Here's how our version of “stan” (stone) looks today: stone, stone's, stones, and stones'. Huh? Sounds like Greek? Not to worry. It will all be clear by the end of this section. Pronouns, on the other hand, have retained more of their inflections, and more's the pity. The first-person pronoun, for example, can exist as I, me, mine, my, myself, we, us, our, ours, ourself, and ourselves—11 written forms! Because pronouns assume so many more forms than nouns, these otherwise adorable words can be a real pain in the butt. Head Case: The Three Cases
English has three cases: nominative, objective, and possessive. The following chart shows the three cases:

- Nominative: the pronoun used as a subject
- Objective: the pronoun used as an object
- Possessive: the pronoun showing ownership

Excerpted from The Complete Idiot's Guide to Grammar and Style © 2003 by Laurie E. Rozakis, Ph.D. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
You often hear people refer to life as being more about the journey than the destination. However, for oil and gas companies looking to store the water they use in their operations, the destination is important too. The Alberta Energy Regulator (AER) released updated requirements for centralized fluid storage (CFS) to encourage water reuse in hydraulic fracturing operations. “These changes will reduce industry’s reliance on Alberta’s rivers and lakes and reduce truck traffic,” says Joelle Mac Donald, a senior civil engineer with the AER’s technical science & external innovation branch. “The CFS updates provide both outcome-based and prescriptive requirements that give companies the flexibility to figure out the best solutions for their specific circumstance for storing water for reuse.” CFS facilities are an important way to encourage water reuse, particularly with hydraulic fracturing operations. Process water can be stored in engineered containment ponds and above-ground synthetically lined walled storage systems, both of which have a smaller footprint than freshwater reservoirs. To reduce the reliance on high-quality non-saline water, such as freshwater from rivers and lakes, hydraulic fracturing operators can choose to reuse produced water and water-based flowback, rather than sending it for disposal. To reuse this process water, there needs to be a safe way to store it until it’s needed; the AER’s requirements ensure these waters will be securely stored to protect the environment, wildlife, and the public. As with any AER requirement, the goal is to protect the environment and keep the public safe. Mac Donald explains that for CFS, it means ensuring that the storage facility is appropriately located, the right controls are in place, and that the facility is carefully monitored. “Ultimately, we want to reduce the likelihood of a produced water release; but if something should happen, we want to minimize the consequences,” says Mac Donald.
“We require a lot of mitigation and monitoring for these risks, such as leakage collection and detection systems that provide an early indication of potential problems like pin-hole leaks or damage to the barriers.” CFS facilities also include a secondary containment system, a proactive groundwater monitoring program, and wildlife and waterfowl controls, such as fencing and bird netting. Mac Donald added that CFS facilities fall under the Oilfield Waste Liability Program (OWL). This program helps protect Albertans and the Orphan Well Association from the costs to abandon, remediate, and reclaim oilfield waste management facilities. To learn more about CFS, check out the project video.

- Directive 055: Storage Requirements for the Upstream Petroleum Industry
- Directive 058: Oilfield Waste Management Requirements for the Upstream Petroleum Industry
Farmer and landowner information

What is a living snow fence? Living snow fences are trees, shrubs, native grasses and wildflowers located along roads or around communities and farmsteads. These living barriers trap snow as it blows across fields, piling it up before it reaches a road, waterway, farmstead or community. The term also covers leaving a few rows of corn along the roadside, hay bales, and other ways to use vegetation and temporary fencing to control blowing snow.

How does it work? Drift-free roads are achievable through proper road design and snow fences. A suitably designed roadway will promote snow deposits in ditches rather than on the roadway. Blowing snow that does reach the road will move across without drifting. Snow fences can also help maintain clear roadways by capturing blowing snow upwind of a problem area and storing that snow over the winter.

What are the benefits?
- Prevent big snow drifts and icy roads that can lead to stranded motorists
- Improve driver visibility and reduce vehicle accidents
- Serve as visual clues to help drivers find their way
- Reduce use of public money by reducing plow time and heavy vehicle usage
- Reduce shipping delays for goods and services
- Increase crop yields by 10 percent or more
- Control soil erosion and reduce spring flooding by keeping soil sediment out of the ditches to maintain proper drainage
- Lessen our impact on the environment with less salt use, fewer truck trips and less fuel consumption
- Depending upon the type of living snow fence selected, improve habitat for grassland nesting birds and pollinators, creating an oasis where those species can survive and thrive
- Increased opportunities to view and hunt pheasants and other game birds Help us keep snow and blowing snow off your roads Farmers’ civic responsibility and leadership help keep winter roads open across Minnesota by leaving standing corn rows, hay bales or silage bags to protect selected state highways throughout the winter with our living snow fence program. How do I enroll in MnDOT's living snow fence program? How am I paid? First, check to see if your site is eligible for MnDOT’s living snow fence program by contacting your local MnDOT district snow fence coordinator. Your local coordinator will verify the presence of the blowing snow problem along the section of highway adjacent to your property that you would like to enroll in the program. Next, if your site is eligible and you want to enroll in the program, you will need to become a state vendor. This secure, multi-step process will allow you to be paid through the Statewide Integrated Financial Tools system. This system will collect information about how to reimburse you. You can also sign up for direct deposit through the system. To get paid by MnDOT for a living snow fence on your location, complete the online state vendor registration form. (See how to become a state vendor (PDF).) For standing corn rows, or stacked bales, MnDOT would enter into a short-term (one winter season) agreement with you and payment would be made at the end of winter. Corn can be hand-picked, since MnDOT is paying for the corn stalks needed to catch the blowing snow. If a participating farmer chooses to harvest the corn in the spring they are allowed to keep the corn to use as they choose. For living snow fences consisting of woody vegetation, native grasses and wildflowers, the MnDOT district snow fence coordinator will work with you and your local Soil Water Conservation District, USDA Farm Service Agency and USDA Natural Resources Conservation Service. 
The USDA Natural Resources Conservation Service certifies that the design and plant installation meet its specifications for optimal plant growth and health. Since planting and maintaining woody vegetation is a long-term commitment, MnDOT will enter into a 10- to 15-year agreement with you, compensating you annually for storing snow on your property and maintaining the planting. This agreement may be renewed after 15 years, pending legislative funding and MnDOT identifying a continued purpose and need for blowing snow control in that area.

CP-17A Continuous Conservation Reserve Program
Contact your local USDA Service Center to learn more about enrolling in the CP-17A living snow fence continuous Conservation Reserve Program, including the annual soil rental rates for land enrolled in the program and cost-share assistance for installing the practice. The typical contract length is 10 to 15 years.

If the 150-foot snow catch area is cropped, MnDOT will provide a 50 percent match on the annual CRP soil rental rate payment from USDA. If the 150-foot snow catch area is planted to a pollinator seed mixture, MnDOT will provide a 100 percent match on the annual CRP soil rental rate payment from USDA. MnDOT also provides an annual $155 per acre maintenance payment for each acre enrolled in the CP-17A Continuous Conservation Reserve Program, covers 100 percent of the planting costs by matching the federal cost-share assistance you receive from USDA, and participates in the cost of installing geotextile weed barrier fabric up to $1.00 per lineal foot installed.

Additional resources for landowners and farmer operators
Learn more about MnDOT's snow fence program and opportunities to partner with MnDOT to achieve drift-free roads.
- A landowners guide to living snow fences (PDF) - Growing and maintaining living snow fences (PDF) - Payment structure (PDF) - Standing corn rows improve winter travel brochure (PDF) - Conservation Practice 17A Living Snow Fence - Minnesota CRP Continuous sign-up (PDF)
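The payment components described in the program details above (the CRP soil rental rate match, the $155/acre maintenance payment, and the $1.00/lineal foot fabric cost share) can be combined into a rough annual estimate. This is a minimal sketch only; the acreage, CRP rental rate, and fabric figures below are hypothetical, and actual payments are determined by MnDOT and USDA.

```python
def annual_snow_fence_payment(acres, crp_rental_rate, pollinator=False,
                              fabric_feet=0, fabric_cost_per_foot=1.00):
    """Rough estimate of a CP-17A living snow fence payment.

    MnDOT matches 50% of the USDA CRP soil rental rate if the snow
    catch area is cropped, or 100% if it is planted to a pollinator
    mix, plus a $155/acre annual maintenance payment. Weed barrier
    fabric is cost-shared up to $1.00 per lineal foot; it is a
    one-time installation item, so it is returned separately.
    """
    match_rate = 1.00 if pollinator else 0.50
    crp_match = match_rate * crp_rental_rate * acres
    maintenance = 155 * acres
    fabric_share = min(fabric_cost_per_foot, 1.00) * fabric_feet
    return crp_match + maintenance, fabric_share

# Hypothetical example: 2 acres at a $200/acre CRP rental rate,
# pollinator planting, 1,000 ft of fabric installed at $1.20/ft
# (MnDOT's share is capped at $1.00/ft).
annual, one_time = annual_snow_fence_payment(
    2, 200, pollinator=True, fabric_feet=1000, fabric_cost_per_foot=1.20)
# annual = 1.00*200*2 + 155*2 = 710.0; one_time = 1000.0
```

Under these assumed numbers the pollinator option doubles the rental-rate match relative to cropping the catch area, which is the main financial lever the program description highlights.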
Even when the British did win victories against less wary and able rebel commanders, these proved ultimately unimpressive because of the resilience of the rebels’ military forces. For example, in New Jersey in November and December 1776, in Georgia in December 1778 and January 1779, and in South Carolina in May and June 1780, the British managed to defeat, drive off, disperse, or capture the enemy’s forces; they then detached troops to fortify and garrison posts to tie down the occupied territory. Yet in each case the battered rebels recovered remarkably quickly. In the North the unexpected blows that Washington’s dwindling, tattered band delivered at Trenton and Princeton compelled Howe to evacuate almost all of occupied New Jersey. In Georgia and South Carolina, despite shattering British victories at Savannah, Briar Creek, Charleston, and Camden, the rebels were able to keep putting regular and militia forces in the field to contest the reestablishment of royal authority in the backcountry. In such circumstances British commanders must have felt that they were engaged in a never-ending struggle to cut off the heads of a Hydra-like enemy. As the war progressed, the rebels’ military forces gradually gained experience and discipline, despite the Continental Army’s continued dependence on short-service drafts. In part, this was the result of Washington’s deliberate strategy of exposing his regulars to small doses of combat in the petite guerre. As Major the Honorable Charles Stuart put it in 1777: “The rebel soldiers, from being accustomed to peril in their skirmishes, begin to have more confidence, and their officers seldom meet with our foraging parties, but they try every ruse to entrap them. 
And though they do not always succeed, yet the following our people as they return, and the wounding and killing many of our rearguards, gives them the notion of victory, and habituates them to the profession.” Colonel Allan MacLean was of the same mind: “The rebels have the whole winter gone upon a very prudent plan of constantly harassing our quarters with skirmishes and small parties, and always attacking our foraging parties. By this means they gradually accustom their men to look us in the face, and stand fire which they never have dared to attempt in the field. But this is a plan which we ought to avoid most earnestly, since it will certainly make soldiers of the Americans.” The winter at Valley Forge in 1777–78 marked a major milestone in the tactical effectiveness of the Continental Army, largely due to the efforts of the rebels’ Prussian drillmaster, Major General Friedrich von Steuben. Despite a shaky start, Washington’s regulars performed better at Monmouth Courthouse than in most previous engagements, and the rebel coup against Stony Point the next year was particularly impressive. In January 1780 captive British ensign Thomas Hughes saw a battalion of Continentals marching southward who “had good clothing, were well armed and showed more of the military in their appearance than I ever conceived American troops had yet attained.” Months later Captain John Peebles made a similar observation on the surrendered rebel troops at Charleston: “They are a ragged, dirty-looking sort of people as usual, but [they have] more appearance of discipline than what we have seen formerly, and some of their officers [are] decent-looking men.” During the southern campaigns, disciplined Continental Army corps like the 1st Maryland Regiment demonstrated their increasing ability to repulse British bayonet rushes. 
Hence the Hessian adjutant general in America expressed surprise at Clinton’s displeasure at the expense of Cornwallis’s hollow victory at Guilford Courthouse: “I myself do not see anything extraordinary in it, for since we made no effort to smother the rebellion at the beginning, when it could have been done at a small cost, the rebels couldn’t help but become soldiers.” After his capture at Yorktown, the sight of rebel troops at exercise particularly impressed captive Captain Johann Ewald: “Concerning the American army, one should not think that it can be compared to a motley crowd of farmers. The so-called Continental, or standing, regiments are under good discipline and drill in the English style as well as the English themselves. I have seen the Rhode Island Regiment march and perform several mountings of the guard which left nothing to criticize. The men were complete masters of their legs, carried their weapons well, held their heads straight, faced right without moving an eye, and wheeled so excellently without their officers having to shout much, that the regiment looked like it was dressed in line with a string.” As the war dragged on, an increasing number of those who were drafted into the Continental Army or called out for militia service, or who offered themselves as paid substitutes for such men, were themselves veterans of previous campaigns. In short, it is difficult to avoid the conclusion that, by the end of the war, the best of the rebels’ regular corps were tactically every bit the equals of their British counterparts. The resilience of the Continental Army was central to Britain’s eventual failure in America. Eighteenth-century military convention dictated that the army that held the field at the end of an engagement had gained the victory. By this standard Crown troops won the great majority of the engagements of the American War. 
Yet while battles like Bunker Hill, Freeman’s Farm, Guilford Courthouse, Hobkirk’s Hill, and Eutaw Springs were all unquestionably British tactical victories, they were simultaneously strategic reverses on three counts. First, they cut deeply into Britain’s limited military manpower. Second, they did not neutralize the rebels’ field armies. Third, they failed to convince colonial public opinion that Crown forces were invincible. For example, if Cornwallis had intended that a victory at Guilford Courthouse would prove the superiority of His Majesty’s arms and thereby rally the people of North Carolina to the royal cause, then the result impressed few. As the earl sadly reported, in the aftermath of the action, barely one hundred locals were willing to come out in arms in his support: “Many of the inhabitants rode into camp, shook me by the hand, said they were glad to see us and to hear that we had beat Greene, and then rode home again.” Furthermore, as the war unfolded, the political dividend that Crown forces gained from clear operational or tactical successes against the rebels proved less and less potent. Probably the best example of this was Cornwallis’s triumph at Camden. In the first flush of his victory, the earl optimistically predicted, “The rebel forces being at present dispersed, the internal commotions and insurrections in the province will now subside.” But when (as Lord Rawdon put it) “the dispersion of that force did not extinguish the ferment which the hope of its support had raised,” Cornwallis rationalized the failure of his prediction by suggesting that “[t]he disaffection . . . in the country east of [the] Santee [River] is so great that the account of our victory could not penetrate into it, any person daring to speak of it being threatened with instant death.” Rawdon’s assessment was more straightforward: “The approach of General Gates’s army unveiled to us a fund of disaffection in this province of which we could have formed no idea. . . . 
A numerous enemy now appears on the frontiers drawn from Nolachucki and other settlements beyond the mountains whose very names have been unknown to us.” Rawdon’s admission that the British had simply been oblivious to the scale, intensity, and persistence of popular hostility to royal authority in South Carolina was reminiscent of Burgoyne’s obvious alarm three years earlier, when he reported from New England that “[t]he great bulk of the country is undoubtedly with the Congress in principle and in zeal” and that New Hampshire, “a country unpeopled and almost unknown in the last war, now abounds in the most active and most rebellious race of the continent and hangs like a gathering storm on my left.” In short, if British military successes impressed the undecided, they did not intimidate inveterate rebels, whose numbers and determination Crown commanders gradually came to realize they had drastically underestimated. If British commanders ultimately did not reap the expected political fruits from their military successes, their armies’ unhappy interaction with the population at large certainly wrought massive political damage. Among the leading causes of this alienation were the unauthorized employment of “fire and sword” methods by some hard-line officers and the nefarious misdemeanors committed by the rank and file, including theft and rape. In some ways British soldiers proved excellent recruiting agents for the rebel cause. Even if British military successes had encouraged the rebel leadership to sue for peace and the majority of the population to acquiesce in the restoration of royal government, it is far from clear that this would have signaled the end of the conflict. Instead, it is quite possible that the British would have found their authority still contested at a local level by inveterately hostile sections of the population (much as occurred over a century later during the guerrilla phase of the Second Boer War). 
The lawlessness that wracked the “no-man’s-land” around New York City for much of the conflict, and the bloody civil war that ravaged Georgia and the Carolinas when the strategic focus shifted to the South, were surely a foretaste of what must have happened had the British succeeded in defeating organized rebel resistance across the continent. It is difficult to see how Britain’s limited military resources could have successfully overcome such a state of universal anarchy. While Crown forces won the great majority of the battlefield engagements of the American War, the fruits of these victories were too limited to decide the outcome. Certainly it was beyond the powers of the British to “destroy” rebel field armies on the battlefield. This was because rebel commanders generally succeeded either in evading battle under unfavorable conditions or in escaping the worst consequences of a defeat. They managed the latter feat because the British were generally incapable of mounting an effective pursuit to disrupt or interdict the flight of the vanquished from the battlefield. As Major General the Chevalier de Chastellux put it, “it is not in intersected countries, and without cavalry, that great battles are gained, which destroy or disperse armies.” As the struggle dragged on, and despite repeated reverses, the rebels’ military forces gradually gained in experience and discipline to the point that, by the end of the war, the Continental Army’s best corps were able to meet the King’s troops on the open field on more or less equal terms. This made British victories all the more difficult and costly. Additionally, the British appear simply to have overestimated the political worth of military success. While their Pyrrhic tactical victories predictably failed to convince many Americans that Congress was doomed to defeat, neither did great victories like Camden persuade inveterate rebels to abandon the cause. 
Had the rebel leadership given up the struggle, and had the mass of the population resigned themselves to the restoration of royal authority, it is likely that these incorrigibles would simply have made America ungovernable.
- I have two PLC processors and want them to communicate with each other. How can I do so?
- What is a thermostat, and how does it work?
- What is a feeder?
- What is the problem with single-phase AC motors, and how is it overcome?
- After synchronization, how is generator speed controlled by the grid?
- What is the significance of a 3.5-core cable? What does the half core mean, and why is it required?
- Single phase can run in PVC conduit, and three phase can also run in one PVC conduit, but two phase cannot run in one PVC conduit. Why?
- My company's total load is 1000 kW at 415 V, and all loads are inductive. The incoming power factor is 0.8, and we want to reach 0.95 PF by connecting capacitors. What rating of capacitors do we have to connect, and how is it worked out?
- If the transformer capacity is 500 kVA, what is the amp rating of the main breaker?
- What are the effects of variations in frequency when connected to the national grid?
- In a transformer, the hysteresis and eddy current losses depend upon: 1. load current; 2. maximum flux density in the core; 3. type of lamination; 4. supply frequency. The correct statements are (a) 1 and 3, (b) 2 and 3, (c) 1 and 4, (d) 2 and 4.
- How to decide the capacity of a choke for a drive?
- Why is let-through energy not considered for an ACB breaker while sizing cable?
- How to identify the connection leads of an unknown electric motor?
- Does a loose connection on an outdoor isolator cause short-circuit current in that circuit? If yes, how?
- I completed a BE and did an apprenticeship at TNEB, and have 6 years of experience. I want to get a C license. Will the apprenticeship and BE be helpful for getting it? Will I get any preference by showing this?
- On an ACB breaker rated 3200 A, the bus coupler goes to OFF mode, then resets and returns to the normal position. What explains this?
- How many turns are there in the primary winding of a transformer?
- Can XLPE/SWB/PVC (P3) 7C x 2.5 mm² be replaced by XLPE/SWB/PVC (P2) 7C x 2.5 mm²? (P3 vs P2)
- What will happen if the OLR CTs in all phases are short-circuited?
- What are the applications of different multistage amplifiers? Please also write the advantages of these amplifiers.
- What factors should be considered while designing a breaker, either HT or LT?
- Can two phases with a neutral supply be taken in a 3C x 10 sq. mm cable?
- How can one see whether an AC unit is indoor or outdoor, and which Tr?
- Why do we make an earth connection with the transformer body, and what is the minimum size of copper wire per code with respect to the kVA rating of the transformer?
- We have 5 distribution sections with variable loads and 3 gensets (500 kVA, 380 kVA and 250 kVA); our total load is between 400 kVA and 600 kVA. How could we connect the 3 gensets?
- How to calculate wire length, size and current? (related to control panels)
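Two of the questions in the list above are direct calculations: the capacitor rating needed to raise a 1000 kW load from 0.8 to 0.95 power factor, and the full-load current of a 500 kVA, 415 V transformer. A sketch using the standard power-factor correction and three-phase current formulas (this is general theory, not an answer taken from the original page):

```python
import math

def capacitor_kvar(p_kw, pf_old, pf_new):
    """kVAR of capacitors needed to raise power factor from pf_old to pf_new.

    Qc = P * (tan(acos(pf_old)) - tan(acos(pf_new)))
    """
    return p_kw * (math.tan(math.acos(pf_old)) - math.tan(math.acos(pf_new)))

def full_load_amps(kva, volts_ll):
    """Three-phase full-load current: I = S / (sqrt(3) * V_line)."""
    return kva * 1000 / (math.sqrt(3) * volts_ll)

print(round(capacitor_kvar(1000, 0.80, 0.95)))  # about 421 kVAR
print(round(full_load_amps(500, 415)))          # about 696 A
```

So the 1000 kW plant would need roughly 420 kVAR of capacitors, and a 500 kVA / 415 V transformer draws roughly 700 A at full load, which is what the main breaker rating must accommodate.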
The following article covers the job of an EMT, including duties, training and qualifications, skills, working hours, and career opportunities. EMT stands for Emergency Medical Technician, a first responder to emergencies. EMTs provide non-invasive basic care in situations such as violent crimes and car accidents. After providing first aid and stabilizing the patient alongside other first responders, such as firefighters, police officers, and paramedics, EMTs transport patients to the hospital.

What Does an EMT Do
Emergency medical technicians are dispatched to emergencies through 9-1-1. They usually work alongside paramedics and firefighters to provide basic care to ill or injured people. Since emergencies can happen at any time, an EMT may work at any time of the day or night, as well as on weekends and holidays, depending on their schedule. In this fast-paced and stressful work environment, EMTs earn a median salary of $32,670 per year.

- Arrive at emergency sites and transport patients to hospitals.
- Assess the situation and call for backup if needed.
- Make the scene safe for bystanders or traffic if needed.
- Determine which patients' conditions are most critical and provide care to them first.
- Provide first aid and emergency care when necessary.
- Safely place patients on stretchers and secure them in the ambulance.
- Provide medical care on the way to the hospital as needed.
- Communicate with the medical facility regarding the patient's injuries and medical needs.

An EMT may be exposed to car accidents, violent crimes, and other traumatic scenes regularly, so they need to respond well in stressful situations and be able to maintain their own mental health. EMTs have to make decisions quickly and under great pressure, and those decisions must be correct and confident, since they are made in life-or-death situations.
When evaluating a patient's condition, EMTs need to notice any signs of serious injury or illness, maintain accurate records, and administer medications at the right dosage. An EMT must also communicate with hospitals, dispatchers, coworkers, and distraught patients and bystanders; communication skills are important for conveying the right medical information and helping others stay calm.

How to Become an EMT
Aspiring EMTs need a high school diploma or GED together with completed job-specific training. Upon completing training, they must pass a written and practical examination to get certified. More detailed information on qualifications, training, and experience is listed below.

Training and Qualifications
An EMT is an entry-level job, so candidates only need a high school diploma or equivalent (GED). In addition, however, prospective candidates must complete EMT training, which comes in three levels:

Level 1/EMT Basic
This is the first level, which must be completed before a person can work as an EMT and before moving on to the other levels. At this point, candidates learn basic first aid such as stopping bleeding, setting broken bones, and handling respiratory and cardiac emergencies. Training at this level includes clinical labs and fieldwork. Upon completing 120 to 180 hours of training, candidates must pass a written and practical state-certified exam, earning EMT-B certification. Completing the entire level usually takes about 6 months.

Level 2/EMT Intermediate
This level follows completion of the Basic one and includes an extra 30 to 350 hours of training. Candidates learn how to use more advanced medical equipment and how to administer medications and intravenous fluids.

Level 3/Paramedic
A paramedic is the highest level an EMT can reach. Paramedics are typically trained at community colleges in a degree program lasting 2 years.
The training and examinations are more advanced and extensive, comprising 1,200 to 1,800 hours of training. After completing the training, a candidate must pass a state licensing examination. Qualifications and certifications for EMTs and paramedics vary by state.

Starting a job as an EMT doesn't require work experience; the training candidates undergo provides enough experience to begin. However, having some prior exposure to emergency-type situations can be beneficial. One of the most essential qualifications for an EMT is the ability to stay calm and make the right decisions in stressful situations.

The position of an EMT is typically full-time at 40+ hours a week, though weeks of more than 50 hours aren't that common. If you are looking for a 9-to-5 job, the position of an EMT won't be the best fit. Since emergencies are unpredictable and can happen at any time, EMTs work various shifts, including overnights, weekends, and holidays. Shifts last from 12 to 24 hours, and EMTs usually work set shifts on specific days, with a specific number of shifts per week.

Even lower-level EMTs make a decent salary, around $32,000 or more depending on experience, and paramedics can earn over $70,000. In financial terms, this is a great job. However, with high stress and constant traumatic events, EMTs are prone to burnout, and the long hours can make balancing work and personal life difficult. EMTs should be certain they can handle this job both emotionally and personally.

The job outlook for the profession is positive: there is always a need for emergency services, and with the aging baby boomer population, demand only continues to grow. With three EMT levels, there is always room for advancement, from basic to intermediate to paramedic. Paramedic EMTs who wish to advance even further can become administrative directors, managers, or teachers.
Many EMTs also use this position as a stepping stone to higher medical professions, such as nurse, physician assistant, or doctor. The job of an EMT is for those who can withstand stress and balance life and work, but it's a highly rewarding profession: EMTs help people while enjoying multiple advancement opportunities and a decent salary, so the benefits of the job may outweigh the stress and difficulties involved.
Planning like a pro isn't something that comes naturally for all teachers. The way you learn to lesson plan in college doesn't always translate well into your own classroom. It's not just about filling out templates; it's about really integrating content and standards into your organized lessons. In addition, lesson planning is about organizing your time wisely. Lesson planning can feel overwhelming, but it's even more frustrating when you don't have plans ready to go. Learning how to plan like a pro can help you feel confident teaching content and working with your students to meet their needs! Here are some lesson planning tips you won't want to be without!

Weekly and Long-Term Planning Tips
Planning your lessons is not just a daily task. In fact, setting up lessons that incorporate weekly, quarterly, semester, and year-long plans saves you time and helps you focus on long-term goals for yourself and your students. Planning long term is easiest when you have a system of organization (for time and materials) in place. These and other tips are available in the Classroom Organization Academy!

Set Up a Clutter-Free Planning Station
Make sure you have a place to plan that only has what you need for the tasks at hand. Clear away other distractions and keep what you need at your fingertips (or at least on a nearby shelf). These materials include:
- Teacher editions
- Grade-level standards
- Pacing guides
- Long-range planning sheets
- Previous plans from the year before, to reference what you've done in the past
- A portable, sturdy teacher bag
- Lesson planner (digital or paper)
Learn more about setting up spaces to stay organized in the Classroom Organization Academy!

What is Batch Lesson Planning?
Batch working is not used only by teachers. Experts say that doing a "batch" of common tasks in one session saves time and helps you focus. Planning pros use batch lesson planning to plan effectively for the long term. Here's how it works!
First, take one subject and plan out multiple days' or weeks' worth at a time. If I can plan out two units or even a full month of content, so much the better. It can seem daunting at first, but the hardest part is really just sitting down and getting started. Start by writing out the plans, making all of the copies, prepping all of the materials, or uploading all of the resources that will be needed.

Benefits of Batch Lesson Planning
Batch lesson planning gives you the benefit of knowing the bigger picture through long-range planning. Plus, you are focused on planning content, not the other stuff! You actually save time because you are not losing it to task changes and context switching. In addition, you enjoy stress-free weekends filled with doing the things you love, all because your lessons are planned ahead of time! Being prepared for your students with lessons is so important. With batch lesson planning, copies are made and ready for the week, and you're prepared to walk in the door and teach on Monday!

Get Started Lesson Planning
Lesson planning doesn't have to be stress-inducing. Here are some easy tips to get started planning your weekly and monthly lessons!
- Schedule a SET TIME (pick a date), eliminate your distractions, and just plan! This might be 1-2 hours. Ideally, you are going to plan out 1 month of lessons.
- Pick another day to plan out the following few months or weeks. Once you start this system, it becomes so much easier!
- Have your standards accessible and use backwards planning, because this is what you will need to be teaching. Ask yourself: what are your kids working towards? Check out Understanding by Design resources on backwards design. Pick activities that address the standards you plan.
- Give yourself some wiggle room. Add 2 floating days into your lesson plans: leave two days blank to absorb lessons that get pushed back.
- Pick one day to make copies for the entire next week.
If you can have parent volunteers in your classroom, utilize them to make copies! You'll feel so much better and save time, because from then on you're only making small adjustments to this plan!

Store Lesson Planning Materials
Where do you keep all your weekly materials and teacher editions to keep the clutter at bay? Try some of these ideas for storage and organization in your classroom!
- Utilize bins for subjects (ELA, math, science, etc.) and drop everything you need for the week or the unit in the bin.
- Have portable or permanent cabinets in your room? Use the drawers for subjects as you would use bins.
- One affordable, simple way to organize papers and lessons is with magazine boxes for days of the week. Use them on their sides or standing up.
- Try colorful hanging files for days of the week to store lesson plans, master copies of handouts, or student samples.
- Have manipulatives or student supplies on a cart or tray, ready to be used.

Organize Lesson Plans for Substitute Teachers
Not only do your lessons need to be organized for you and your students, you also need to make sure they are ready for subs! Your sub will be less likely to waste a day on a video or busy work when you have your lessons organized digitally and ready to go. Creating easy digital sub plans will save you time when you're not feeling up to planning at the last minute. Use digital sub plan templates via Google Slides to save you and your sub time! For paper materials, use a sub tub to house all the lessons and materials a sub needs throughout the day or for long-term sub jobs.

Become a lesson planning pro with these tips to organize materials, batch lesson plan, and save time! Join the Classroom Organization Academy for tips to keep your physical and digital classroom organized so you can get back to doing what you love: teaching students! What are your favorite teacher tips for lesson planning?
These traditional hunter-gatherers are unlike any other tribe. The Hadzabe of Tanzania are a remarkable example of living history and probably unique in their genetic heritage.

The Hadzabe Tribe
The Hadzabe way of life has changed very little. For thousands of years these people have been full-time hunter-gatherers, which makes them the last of their kind in Africa. The tribe is not closely linked to any other by language or genetic heritage. Living off the land on a full-time basis is extremely difficult in this day and age. Attempts by various authorities to introduce an alternative way of life have mostly failed, and the Hadza continue in the manner of their ancestors, despite land encroachment and mounting pressure from surrounding communities.

The Hadzabe rely on free-ranging game for hunting and gather wild foods such as berries, honey and tubers. They source water from a few water holes and other natural sources. The tribe consists of a number of bands that move from place to place according to seasonal changes and the availability of food and water. A camp can be set up and shelters built in just a few hours; likewise, individuals can pack up their belongings and carry them on their backs when they need to move. Movement can also be prompted by disagreements in the camp, illness or death. No adult in camp holds dominance over another; everyone has the same status.

Hadzaland
The lands around salty Lake Eyasi in the Serengeti ecosystem are where you will find the Hadza people. This area lies near Olduvai Gorge and Laetoli, both deeply significant archaeological sites in the Central Rift Valley for the study of early man and his origins. Prehistoric evidence indicates that hunter-gatherer communities have lived in this area for at least 50,000 years, and it is likely these early people are the ancestors of the Hadzabe.
Cultural Tourism
Spending time with the Hadza people is quite a culture shock. They don't live by the same rules or keep time as we do. They rely on oral tradition to remember their past and do not use calendars, clocks, or counting past three or four as a form of measure. They have never experienced wars, famines or serious disease outbreaks, and they have no carbon footprint. They survive by their instincts and skills and by their ability to interpret nature's cycles and signs. They can walk confidently through the deepest bush at night and be completely at ease. They are living reminders of how humanity is thought to have lived from as far back as 2 million years ago; the advent of farming in the outside world is a very recent blip on this timeline. You can enjoy a truly fascinating visit with these gregarious people and experience life as a hunter-gatherer. On hunts with the Hadzabe you'll see how they stalk their prey with poison-tipped arrows and how they interact with the Honeyguide bird to find a beehive. You'll also learn how to forage for wild fruits and tubers as well as medicinal plants. Around the campfire everyone relaxes with stories and songs, and on moonless nights they dance. There are no worries or thoughts about anything except the present moment; they remain free and live well in the bounty of nature. What more could they want?
<urn:uuid:16c14199-ca97-43ae-9131-3bf6e5d5021c>
CC-MAIN-2019-51
http://www.safari.co.za/Tanzania_Travel_Guide-travel/living-history-meet-the-hadzabe.html
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00077.warc.gz
en
0.972036
711
2.890625
3
2.517098
3
Strong reasoning
History
"Concerning Virgins" was one of Ambrose's first published works after his conversion and consecration as a bishop in 374 AD. The essay purports to be written to his sister Marcellina, who was consecrated as a virgin in the basilica in 352 AD, on the anniversary of St. Agnes' martyrdom. Agnes of Rome, legend has it, was a consecrated virgin who was martyred in 304 AD for her faith as well as for her refusal to be wed to the local prefect Sempronius. Legends surrounding the manner of her martyrdom include her hair growing to cover her naked form, her being dragged along the street to a whorehouse, and those who attempted to defile her being blinded or struck dead.

The essay itself takes the form of a panegyric (eulogy) that praises the pious:
- why Ambrose is authorized and empowered to write on this topic despite his own failings (1-4)
- Agnes as a new kind of double martyr (5-9)
- superiority of Christian virgins (10-19)
  - tradition of holy virginity (10-13)
  - tradition of pagan (vestal) virginity (14-19)
- advantages of virginity (20-31)
  - downside to married life (23-27)
  - foolishness of pursuing physical beauty and fashion (28-30)
  - as immaculate virgin
- the concerns of parents (32-34)
- superiority of virginity (35-45)
  - virginal beauty (35-36)
  - ruling queen and spouse (37-38)
  - of their garments (39)
  - unstained bee (40-42)
  - flower of the field (43-44)
  - garden enclosed (45)
- to stay a signet of the Lord (46-51)
- chastity of angels (52-53)
- and dowries (54-64)
  - of possessions (54-55)
  - of parents trying to prevent their daughters (56-58)
  - praise and renown for the family (59-61)
  - repay for the sacrifice (62-64)
- example (65): "Young girls, you see the reward of devotion. Parents be warned by the example of obstruction."

Questions for discussion:
- Is virginity commendable? Under what circumstances?
- Is virginity beautiful? How so?
- Is virginity superior to marriage? Why or why not?
- Can one be espoused to Jesus? What would it entail?
- Do you find any of Ambrose's reasons convincing? Why or why not?
<urn:uuid:655d6c33-fcbd-49c1-8269-c5f82aa87430>
CC-MAIN-2014-10
http://www3.dbu.edu/mitchell/ambrosevirginity.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999636668/warc/CC-MAIN-20140305060716-00001-ip-10-183-142-35.ec2.internal.warc.gz
en
0.901767
556
2.8125
3
2.755944
3
Strong reasoning
Religion
In the Earth Day tradition, I thought I’d make a list of personal actions that can help save the planet – one that skips over the usual lime green “buy a hybrid and recycle” message and gets right to the good stuff. What follows, then, is my list of ten not-so-inconsequential things we can all do to green up our act, and live a healthier and happier life in the process.

1. Know your footprint. Knowledge is power, and the first step in reducing your impact is educating yourself. It’s not always easy to retrace the consequences of our actions; the extraordinary complexity of global material flows tends to obscure the ways in which our everyday lives are unsustainable. But this much is certain: flying, driving, creating trash, and eating meat are the lifestyle choices with the biggest footprint.

2. Demand more from the places where you shop. Ask the manager of your local big box store why they’re still using plastic bags, now that cities are starting to ban them. Request FSC-certified paper at the copy shop. Inquire about the growing conditions of the food at your favorite restaurant. These kinds of signals actually make a difference to retailers, especially those keen on improving their green cred.

3. Watch your waste. On average, Americans generate 4.5 pounds of waste per day, most of it ending up in a landfill. The vast majority of that waste can be eliminated through composting, recycling, buying products that aren’t overpackaged, and creative re-use. I always try to keep a few plastic bags stashed away in my backpack, for example.

4. Question consumption. The greenest product is the one you don’t buy. To be sure, consumerism is a tough addiction to break: hundreds of billions of dollars are spent every year to convince us that our wants are needs. But the less you buy, the less you realize you really need, and nothing beats the empowering feeling of taking the means of production back into your own hands.

5. Localize.
It’s a good idea to be aware of where everything you buy is coming from, but it’s especially important for food. Subscribe to a CSA and get local, organic produce every week for less-than-Whole Foods prices. If you have a yard, take sustenance into your own hands and begin to make it agriculturally productive.

6. Get to know your neighbors. As much as it’s about energy, waste, or food, sustainability is about rebuilding our communities. Introduce yourself to the family down the street. Hold a potluck or a block party, and get people engaged with the environmental and civilizational issues we’ll all face in the next few years.

7. Unplug. Our technological society is constantly pressuring us to keep up with the latest news, tunes and celebrity gossip – but how much do these things actually improve our lives? Discover the art of unplugging: encourage creative distractions rather than passive ones, and favor the real world over the screen. You’ll be amazed at how much more connected you feel to the people and places around you – not to mention having a lot more time on your hands (just ask no impact man).

8. Put your money where your mouth is. If you have investments, make sure they’re in line with your social and environmental values. Green investment blogger Tom Konrad is just one of many great resources out there for greening your portfolio. Sometimes, though not always, socially responsible investments (SRIs) have a slower rate of return. But look at it this way: would you rather have a 20% yield in 5 years, or a habitable world in 50?

9. Know your ecosystem. Most of us live in urbanized areas, where nature often seems all but invisible – but that doesn’t mean it’s not there. Seek it out. Learn to distinguish the native species of your area (plants too!), and pay attention to patterns of the wind, water and seasons.

10. Get in touch with the big picture.
Until we understand our larger role in the history of humanity and the universe, we’ll be stuck in the destructive, short-term mode of thinking that got us here. Whether it’s through science, meditation or prayer, take a few minutes each day to get some perspective on your place in the bigger scheme of things, and let the insights you gain guide your day-to-day actions.
<urn:uuid:5b7c3a04-f0ea-439a-83e7-28aefe7cc3f3>
CC-MAIN-2018-26
https://wildgreenyonder.wordpress.com/2007/04/21/the-obligatory-wgy-earth-day-top-ten/
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860089.13/warc/CC-MAIN-20180618070542-20180618090542-00231.warc.gz
en
0.931383
937
2.53125
3
2.548278
3
Strong reasoning
Science & Tech.
A dog named Capitan recently passed away in Argentina at the age of 15 after spending 11 years waiting by his owner’s grave every night. Cemetery staff and neighbors made sure he had food, and even when his family took him home, Capitan would always end up back at the cemetery at the same grave at six o’clock, where he would lie down all night. Capitan’s story is touching, and it’s not the only story of dogs mourning the loss of their owners. Deta is another dog that refused to leave her former owner’s grave when her family took her to the cemetery, and the video of her running back to the site and lovingly lying down next to the tombstone is both heartwarming and heartbreaking. Why do some dogs recognize their owners’ resting places, and why do they stay there? Do the dogs expect their owners to return? Are they waiting at the last spot where they were able to get their owners’ scents? Can dogs understand death, and is there a spiritual component to their mourning?

Do Dogs Understand Death?
Research indicates that dogs are able to feel very deep connections to humans, and they experience many emotions similarly to the way we do. As dog lovers, we already know that the love our dogs feel for us goes way beyond the need to have a food provider, and science backs up that claim. In a study published in Behavioral Processes, researchers found that the part of dogs’ brains that lights up when they detect their owners’ scents is the same part of the human brain that reacts to visual beauty and is associated with the early stages of being in love. Additionally, Stanley Coren, a psychology professor at the University of British Columbia, says that research shows that dogs have the mental capabilities of a two- to three-year-old human child. So clearly, dogs are able to feel the pain of loss very deeply, and they have some ability to mentally process and react to that loss. However, whether they can understand the finality of death is not really clear when it comes to science.
Dogs can easily detect the scents of their owners, and human bodies, especially when they are not embalmed, emit many different chemicals that dogs can pick up with their noses. Dogs can certainly smell and understand that there is a difference between a living body and one that is decomposing, but do they know that their owner isn’t coming back to their body? Do they know that death is irreversible?

Do Dogs Expect Their Dead Owners To Return?
There are several stories of dogs waiting for their owners to return, even after their humans have passed away. One of the most famous is Hachiko, a dog in Japan who would wait every day at the train station for his owner to return and continued to wait every day for ten years, even after his owner passed away. Stories of dogs waiting in vain for their owners have even made their way into pop culture, like a particularly notorious episode of the show Futurama, where a dog lives a long life waiting for his human to come back, but he never comes. These dogs were waiting out of habit. They had learned where to expect their humans to be and waited at the last place they saw them. Dogs that wait at their owners’ graves may be waiting at the last place they detected their humans by scent. In fact, with their super noses, they may be able to detect the scent of their owners’ bodies even after they are buried. Coren, the psychology professor, believes it is likely that dogs hold out hope that their owners will simply return–not as corpses, but as they always were in life. He says they don’t understand that death is final, and states, “I hate to say this – but in some respects they may have it better than we do, because at least they still have that glimmer of hope.”

Why Do Dogs Mourn?
If dogs really can’t understand that death is final, why are they in mourning? If their owner can come back at any time, why do they seem to grieve so deeply?
Well, even if dogs can’t understand that death is final, they can certainly feel loss and have very extreme reactions to that loss. Anyone who has taken care of a dog as a pet sitter can tell you dogs go through a range of emotions and behaviors when their owners are no longer around. Some dogs go on hunger strikes when their owners leave for long periods of time. Some have anxiety attacks, some get physically sick, and some wait by the door at the same times every day expecting their humans to walk in like they always did. Is that really any different than waiting by a grave where they know their human’s body is buried? Dogs aren’t just upset that the people who usually meet their basic needs by providing food, shelter, and safety are gone. If it were about basic needs, dogs would attach to anyone who cared for them as soon as their owners left the door and a new, capable caretaker arrived. Eventually, they might grow to trust the new human and even love them, but there’s an adjustment period and a time of mourning. Dogs have a connection to their humans. It’s love, and when their humans are gone, dogs get lovesick. So even if a dog can’t understand that a person is gone forever, they can certainly understand that a person is gone. But even though science can’t really tell us if dogs truly understand death, many of us have individual experiences and spiritual beliefs that influence the way we feel about dogs and their grief.

What About The Spiritual Side And Individual Experiences?
I had a Dachshund named Skippy. He was 16 years old when he came to live with my family, and he had already had two previous owners that had passed away. We got Skippy from his previous owner when she was dying of brain cancer, and Skippy had belonged to her mother before her. She told me a story that has influenced my beliefs about dogs and their ability to understand death. She said that when her mother was on her death bed, Skippy was close by.
At the exact moment that she took her last breath, Skippy cried. It was a loud cry, and a sound that Skippy had never made before that moment and never made after. She believed that Skippy detected her mother’s spirit leaving the body, and that Skippy knew her time had come. Maybe that’s true. Maybe dogs can tell when a spirit has left the body, and maybe dogs that wait at their owners’ graves are waiting for that spirit to return. Maybe they know that the end of life is not really the end. I don’t personally believe in spirits or souls. I think dead is dead. However, I also don’t believe Skippy’s crying out was a coincidence. It seems to me that he could tell that there was a change, and that his owner who was there one second was gone the next. Ultimately, we don’t really have a way of knowing if dogs understand death or what they sense near their owners’ graves. We can’t just ask them, and it’s very difficult to come up with a reliable way to experiment and quantify their grief scientifically, much less comprehend how intricate it may be. Maybe when we look at a mourning dog next to a tombstone, we are projecting our own experiences of loss and misinterpreting what we see in dogs, or maybe we’re exactly right to think that they know all too well what is going on and that they’re experiencing the same pain we do when our hearts are broken. Dogs can sense things that we can’t–we know that for sure. So maybe they know something that we don’t instead of the other way around. What do you think? Do dogs understand death and mourn at their owners’ graves because of it? Are grieving dogs just waiting for their owners to return? Let us know in the comments below!
<urn:uuid:97e821cd-0253-4df6-885d-4160a153a46d>
CC-MAIN-2018-30
http://dogtime.com/lifestyle/63037-dogs-stay-owners-graves
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00073.warc.gz
en
0.988969
1,681
2.671875
3
3.050668
3
Strong reasoning
Home & Hobbies
It is only now - as we are able, for the first time, to house a book inside something more resilient and more expansive than paper - that we may take full advantage of employing both methods at the same time, thereby allowing readers to truly understand the original text. - Daniel (Creator)

About Tailored Texts
Tailored Texts is a free, collaborative project with the aim of fostering the creation and sharing of personalised, context-based definitions, footnotes and annotations for the thousands of public domain e-books available online. Since the advent of the internet, free dictionaries, automatic translators and specialist language sites have transformed language learning and hence the ease with which one can read literature in a foreign language. Definitions are now available freely, more quickly and in many new forms. Nonetheless, most on-line dictionaries (like their off-line equivalents) still provide only a long list of words - most of the time without sufficient context. That is the nature of a dictionary - designed to serve all contexts rather than a specific text: the definition on offer is not tailored to the text.* Conversely, the other great tool used to help the reading of foreign literature - full translations (on-line or off-line) - offers just one specific translation at a time where, as any linguist knows, many may well exist. Therefore, when using support materials to read literature, we either have to trawl through large entries to find a translation that applies or we are provided with just one solution. At Tailored Texts, our mission is to show that a third way is now possible. With our notes tool, we aim to build up a bank of multiple translations of any given word or phrase in their specific context - a work of literature - as well as the justifications for each choice. Traditional and modern support materials will be provided to help.
What's more, there will be space to analyse anything of grammatical interest and a place to write more general comments on the literature and authors themselves. These notes can be written in any language. In other words, individuals reading the same book - who might otherwise, all alone, write notes in paper margins or underline unknown words to be transferred at a later stage to a notepad - can now collaborate with other linguists when translating vocabulary or writing notes. This can only lead to a better comprehension of the text, while the process of note-adding itself will prove to be a hugely rewarding discipline for any linguist who reads to learn. Welcome to Tailored Texts!

* The most notable exception is, of course, Wordreference.com, a marvellous site whose well-populated forums allow for more context and discussion of the language.
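The multi-translation note bank described above can be pictured as a small data structure: each annotated span of a text accumulates competing translations together with their justifications. This is a hypothetical sketch in Python; the class and method names (TailoredText, annotate, readings) are my own invention, not the site's actual implementation:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Note:
    """One proposed rendering of a word or phrase, with its rationale."""
    translation: str
    justification: str
    note_language: str = "en"  # notes may be written in any language

class TailoredText:
    """A public-domain text plus a bank of context-specific notes."""

    def __init__(self, title):
        self.title = title
        # (start, end) character span in the source text -> list of Notes
        self.notes = defaultdict(list)

    def annotate(self, span, translation, justification, note_language="en"):
        """Add one more competing translation for the given span."""
        self.notes[span].append(Note(translation, justification, note_language))

    def readings(self, span):
        """All translations proposed so far for one span, oldest first."""
        return [note.translation for note in self.notes[span]]

book = TailoredText("Le Petit Prince")
book.annotate((0, 15), "The Little Prince", "conventional published title")
book.annotate((0, 15), "The Small Prince", "literal, but loses the fixed idiom")
print(book.readings((0, 15)))  # ['The Little Prince', 'The Small Prince']
```

Because each span keeps a list rather than a single entry, multiple readers can record alternative renderings side by side, which is exactly the "third way" the project describes.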
<urn:uuid:deae1791-b8e5-4fb5-a570-7ab65150b0bd>
CC-MAIN-2017-17
http://www.tailoredtexts.com/page/about/
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00634-ip-10-145-167-34.ec2.internal.warc.gz
en
0.924165
564
2.671875
3
2.859677
3
Strong reasoning
Literature
European models promote fidelity
Laurent Bernardin, chief scientist and vice president of R&D, Maplesoft

Physics-based modelling – or physical modelling – for virtual prototyping of engineering products has brought about dramatic savings in time and cost over the past 20 years. Furthermore, the increasing use of controllers in engineering products has driven the use of physical modelling tools for accurate plant characterisation, which is usually the first and often the most time-consuming stage in control system development. Accurate prediction of the behaviour of engineering systems, through the use of powerful mathematical modelling tools, can save millions of dollars in the prototyping and production stages of a product. This has motivated many engineering organisations to invest heavily in model-based design and simulation tools. However, it is becoming apparent that existing modelling tools fall short of what is required to do this effectively, and physical modelling of engineering systems has become a hot topic among engineers as they hit these limitations. Fortunately, a new wave of methodologies, technologies, and products is developing to address the issues faced by engineers, and one particular European initiative is emerging as the leader in this movement. But first, a look at the limitations in current practices is required. If you consider the history of engineering modelling and simulation, you will note that the block-diagram approach employed by some tools has changed very little in more than 50 years. In our opinion, the signal-flow paradigm it uses is a legacy from the days of the analogue computer. As pressures grow on their time, engineers are now finding this approach to physical modelling to be onerous because of the time and effort required to manually prepare the model for representation as a block diagram. The approach is also inherently weak in certain computational respects, such as poor handling of algebraic loops.
If you need a powerful illustration of these limitations, try using a block-diagram approach to enter an electric circuit! To address these issues, a new approach to physical modelling is emerging from a collaboration between several European universities, tool vendors, and industrial partners. The Modelica Association (www.modelica.org) was started in 1996 as an initiative to develop a standard model-definition language that would allow convenient, component-oriented modelling of complex engineering systems requiring the inclusion of multiple domains such as mechanical, electrical, electronic, hydraulic, thermal, control, electric power, or process-oriented subcomponents. Modelica models capture and manage all of the necessary relational, physics, and mathematical information for complex systems. Because it is better suited for handling the mathematical framework of model development, Modelica makes detailed models easier to develop. The Modelica language allows use of an object-oriented representation that permits a very easy definition of a system model by graphically describing its topology: simply put, users connect components and define how they are related without having to worry about which signals are inputs and which are outputs. This means, for example, that an electric circuit (a classic example of a topological representation) looks like an electric circuit on a computer screen: this circuit can then be easily connected to a mechanical system model through motor models, shafts, gears, and so on. To introduce a little jargon, this topological approach to model definition is called ‘acausal’ and lifts many of the restrictions imposed by the signal-flow, or ‘causal’, approach. This has made the mathematical formulation of system models very easy, but has led to some challenges in running simulations. 
Causal block-diagram tools only need to solve systems of ordinary differential equations (ODEs), but acausal modelling introduces a different class of mathematical model: Differential Algebraic Equations (DAEs). These are systems that include both ODEs and algebraic equations that are introduced by added physical constraints. Depending on the nature of these constraints, the DAE problem increases in complexity, usually indicated by an increase in the DAE ‘index’. The development of generalised solvers for high-index DAEs is the subject of a great deal of research, and it is acknowledged by leaders in the field that symbolic computation will play a major role. For many years, my company has been actively engaged in developing DAE solvers that incorporate leading-edge symbolic and numeric techniques for solving high-index DAEs. Until now, the use of Modelica has been largely focused within European companies that were early to adopt this new modelling methodology, and it is beginning to impact mainstream engineering there. However, word is spreading in North America: there is a growing move towards offering modelling tools that use the topological modelling approach, described above, for multidomain systems, and we’re hoping to lead that charge with the launch of a new product later this year. One of the early proponents of Modelica, Dr Michael Tiller, VP of modelling research and development with US engineering consulting firm Emmeskay, said: ‘Modelica was started as an effort to develop a non-proprietary approach to modelling. The goal was to make modelling an open process allowing free collaboration between industry, universities, and tool vendors. As the growth of the internet has shown, open standards are much better for consumers than so-called “walled gardens”.’ After 50 years, we believe the signal-flow block diagram is coming to the end of its useful life for physical modelling.
With the help of Modelica, we are addressing many of the weaknesses inherent in traditional modelling tools, as well as the challenges of advanced modelling approaches, to feed into the next generation of modelling and simulation tools.
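The index problem is easiest to see on a concrete example. The planar pendulum, written acausally in Cartesian coordinates, is the textbook index-3 DAE: two Newton equations plus an algebraic rod-length constraint. Differentiating the constraint twice and eliminating the rod tension by hand mimics, in miniature, the symbolic index reduction that Modelica tools perform automatically. The sketch below is my own illustration (not from the article) in plain Python with a hand-rolled Runge-Kutta step:

```python
# Planar pendulum, modelled acausally in Cartesian coordinates:
#     x'' = -lam*x,   y'' = -lam*y - g,   x^2 + y^2 = L^2
# is an index-3 DAE (lam is the unknown rod tension, mass m = 1).
# Differentiating the constraint twice and solving for lam yields an
# ordinary ODE that a standard integrator can handle.

G, L = 9.81, 1.0  # gravity, rod length

def f(s):
    """Right-hand side of the index-reduced pendulum ODE."""
    x, y, vx, vy = s
    # From d^2/dt^2 (x^2 + y^2 - L^2) = 0:
    lam = (vx * vx + vy * vy - G * y) / (L * L)
    return (vx, vy, -lam * x, -lam * y - G)

def rk4_step(s, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Start with the rod horizontal and integrate for two seconds,
# tracking how far the solution drifts off the algebraic constraint.
s, h, drift = (L, 0.0, 0.0, 0.0), 1e-3, 0.0
for _ in range(2000):
    s = rk4_step(s, h)
    drift = max(drift, abs(s[0] ** 2 + s[1] ** 2 - L * L))

print(f"max constraint drift after 2 s: {drift:.2e}")
```

The tiny constraint drift confirms the reduced ODE still honours the original algebraic equation; over long simulations this drift grows, which is why production DAE solvers add stabilisation or projection steps on top of the index reduction.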
<urn:uuid:30c63b45-4cd5-441c-ad6f-986f98416929>
CC-MAIN-2016-26
http://www.scientific-computing.com/features/feature.php?feature_id=208
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950715
1,147
2.984375
3
3.00679
3
Strong reasoning
Science & Tech.
I’m a writer, not a programmer. But I spend much of my life surrounded by people who write code and it’s hard not to cultivate at least a passing familiarity with something that permeates your work environment. You’ll know what I mean if you’ve ever jumped into source view in your content management system, or asked a co-worker to explain that strange combination of English and…whatever…on the monitor. Feeling a bit like a stranger in a strange land, I decided to learn a little about the local customs and language: just enough to help me feel a little more at home. After completing a few introductory computer programming courses through Codecademy, three things occurred to me.
- Memorizing the phrase book doesn’t make you fluent in a foreign language!
- There is some real validity to the whole right brain/left brain thing. For example, trying to think like a computer physically hurts a predominantly right brain.
- Aside from understanding how to program computers, writing code also offers some important lessons about understanding in general.
Yes, in my never-ending search for meaning, I took the following lessons from my experimental foray into computer programming.

1. It’s not just what you say; it’s how you say it
In programming words matter. A computer can only do exactly what it’s told to do. But what it’s told to do must be communicated in the right form, with the right syntax, and based on the right requirements or it will be misinterpreted. And each language is different. To be effective, you have to master the nuances of the programming language and make sure you “write it right.” You also have to be sure you understand what’s required and that you’ve accurately communicated those requirements.
As an example, consider this version of a common inside joke among programmers: “The programmer’s husband asked her to bring home a dozen eggs and bacon—she brought home 12 of each.” Computers (and sometimes programmers) interpret things literally and sequentially, so how you say it really matters. In the world of human dynamics and interpersonal communication, words matter too. But so much is communicated by tone, tempo, and facial expression that the words being used are subject to misinterpretation and may go completely unheard. Just as a line of code with broken syntax, or instructions that rely on inference, can break a program, how you say something often makes the difference between a successful transfer of meaning and a communication train wreck!

2. The importance of coming full circle
As I cursed the noncommittal text editor on my screen, another fascinating parallel (between dealing with computers and dealing with people) struck me. Computers need closure. For everything you tell them to do, you have to tell them when to stop. Every tag you open must be closed. Every process initiated must be halted, and every byte of memory allocated released. If not, “baggage” accumulates, bugs wreak havoc and stuff simply goes off the rails. The same thing happens in the workplace when intended messages get derailed and the resulting interpersonal challenges go unresolved. So much of what makes people struggle with their co-workers and managers is caused by loose ends: thoughts left unfinished, lack of follow-through, leaving people hanging and incomplete (or non-existent!) feedback loops. No one wants to carry around that kind of baggage. People need closure too.

3. Getting to the source is not about blame
When you write code and something doesn’t go according to plan, you start debugging. The default assumption is that the error is in the code. Since the computer can only do what it’s told, there must be a problem with the instructions.
Through systematic review and retesting, you identify the root cause of the problem and fix it until everything runs the way it’s supposed to. Once a problem is fixed, you may add a test or put a system in place to prevent it from happening again. In software development, finding the fault has nothing to do with assigning fault. It’s just part of the iterative process. When communication breaks down between people, it can be a little more complicated, since both the transmission and interpretation of messages are prone to error. Many people focus on the other person’s inability to “get it” when things go sideways. I can’t help but think the programmer’s approach, of assuming responsibility for the miscommunication and systematically searching for its source, is a great way to tackle human miscommunication too.

4. We are not alone
While some might suggest that the emergence of true artificial intelligence is just around the corner, that’s not what I’m referring to here. We are not alone because few programmers can excel alone. Writing code is a solitary activity. Writing excellent code needs multiple eyes. Just as a professional writer relies on an editor and proof-reader, good programmers appreciate a consistent code review process. We all strive to work effectively with others. And we could all benefit from a little review from time to time. If you haven’t asked someone at work to objectively review your interactions with co-workers as well as your deliverables, chances are you’re missing opportunities to polish your work and to learn in the process. No one excels alone: not a computer programmer or anyone else.

5. Some things aren’t fixable
Every so often, all the debugging in the world doesn’t make a program work as intended. When that happens, you have to throw the code out and start from scratch. Likewise, with relationships and workplace dynamics, occasionally something isn’t fixable.
Sometimes the potential benefit of solving a problem doesn’t justify the amount of effort it will take to do so. And sometimes no solution exists given the current level of knowledge and resources available. When that happens, it’s time to move on—whatever it may involve. Whether or not I continue this coding journey, it was fascinating to discover meaningful parallels between two areas often seen as polar opposites. Here’s hoping it will help me communicate better with the programmers at work! Workplace communication is easier when it’s social (and logical!). Experience Social HCM with NetSuite TribeHR. Sign up for your free 30-day trial today. Photo Credit: Image by Stuart Miles, courtesy of freedigitalphotos.net
<urn:uuid:06397d17-b3ec-4b91-87e7-ce7a13be8392>
CC-MAIN-2017-30
https://humancapitalleague.com/5-things-writing-code-can-teach-us-about-working-with-people/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00238.warc.gz
en
0.938634
1,353
2.9375
3
2.758701
3
Strong reasoning
Software Dev.
The Privacy Protection Act of 1980

The Privacy Protection Act of 1980 ("PPA"), codified at 42 U.S.C. § 2000aa et seq., protects journalists from being required to turn over to law enforcement any work product and documentary materials, including sources, before they are disseminated to the public. Journalists who most need the protection of the PPA are those who are working on stories that are highly controversial or about criminal acts, because the information gathered may also be useful to law enforcement. For instance, a criminal suspect may talk openly to a journalist who promises not to print her name, but will not go to law enforcement for fear of arrest. While law enforcement would like to obtain this type of information from a journalist, the PPA protects the journalist's freedom to publish such information under the First Amendment without government intrusion.

History of the Act

The PPA was the Congressional response to Zurcher v. Stanford Daily, 436 U.S. 547 (1978). That case arose when police conducted a warranted search of the Stanford Daily's newsroom seeking photos of a demonstration at which officers were injured. Staff of the Daily had attended and photographed the violent demonstration and ran a story with photographs. In response to the publication, the police went to the Daily looking for unpublished photographs which investigators could then use to identify and prosecute violent demonstrators. The search turned up no new photographs of the event other than those already published. The paper challenged the search, and a federal district court found that the search was unlawful: "[i]t should be apparent that means less drastic than a search warrant do exist for obtaining materials in possession of a third party." Therefore, in most cases, "a subpoena duces tecum is the proper -- and required -- method of obtaining material from a third party."
The district court dismissed the police's argument that the First Amendment has no effect on the Fourth Amendment. The court found that the Fourth Amendment must be interpreted in light of the First Amendment and that "[t]he threat to the press's newsgathering ability . . . is much more imposing with a search warrant than with a subpoena." The Court of Appeals affirmed per curiam the District Court's finding that the search was illegal. However, the Supreme Court of the United States held that neither the First nor Fourth Amendment prohibited this search. The Court stated: Under existing law, valid warrants may be issued to search any property, whether or not occupied by a third party, at which there is probable cause to believe that fruits, instrumentalities, or evidence of a crime will be found. Nothing on the face of the Amendment suggests that a third-party search warrant should not normally issue. Two years after the Court ruled in Zurcher, Congress passed the federal PPA in order to overrule Zurcher and recognize the need of journalists to gather and disseminate the news without fear of government interference. The PPA, with some exceptions, forbids all levels of law enforcement from searching for and seizing journalists' work product and documentary materials.
- The Privacy Protection Act of 1980, 42 U.S.C. § 2000aa et seq. (2002).
- Zurcher v. Stanford Daily, 436 U.S. 547 (1978).
- The Stanford Daily.

Provisions of the Act

The PPA governs when the government can conduct searches of newsrooms or a reporter's home. Specifically, the PPA states that, "[n]otwithstanding any other law," representatives of the government may not search a newsroom for the purpose of obtaining work product or documentary materials relating to a criminal investigation or criminal offense, if there is reason to believe that the materials belong to someone who will publish them in a "public communication, in or affecting interstate or foreign commerce." Work product materials are those created in anticipation of such communication, such as notes, drafts and film.
Documentary material includes recorded content that may be interpreted in a finished product, such as video, audio and digital records. There are exceptions to the PPA. With respect to either work product or documentary materials, searches are permitted when the person who has the information is suspected of committing the criminal offense or "there is reason to believe that the immediate seizure of such materials is necessary to prevent the death of, or serious bodily injury to, a human being." With respect to documentary materials alone, searches are permitted when the mere notice of a "subpena [for documents] would result in the destruction, alteration, or concealment of such materials," or when, after no response to a subpoena, the government representative has exhausted "all appellate remedies" or justice would be threatened by further delay. Although searches by government officials on both the state and federal levels are covered under the PPA, a handful of states have reiterated or strengthened protection against these searches under state law. Those states are: California, Connecticut, Illinois, Nebraska, New Jersey, Oregon, Texas, Washington and Wisconsin.
- Contact a lawyer. If presented with a search warrant, a person can attempt to delay the search until they can obtain a lawyer to explain the warrant. Whether or not the search is delayed, a lawyer should be called immediately to discuss the options available, which may include emergency review by a judge, or the filing of a lawsuit or administrative proceeding.
- Record the events. A person whose belongings are being searched should record the search as it takes place. Although persons present cannot interfere with a search, they are not required to aid the process either.
- Exercise your legal rights.
The PPA permits persons served with a warrant under this Act "to submit an affidavit setting forth the basis for any contention that the materials sought are not subject to seizure." It also permits a person who has been harmed by a violation of the Act to sue the government or government employee who caused the harm.
- Use encryption. Properly used, encryption can provide a strong shield for information.
- Search warrant went too far; Laws to protect press ignored by legal system, Palo Alto Daily News, July 26, 2002.
- Elaine Hargrove-Simon, Newspapers Under Siege: Bay Area Newspapers Searched, Silha Bulletin 7:4, 2002.
- Reporters Committee for Freedom of the Press, The USA PATRIOT Act and Beyond, Homefront Confidential (2003).
- Reporters Committee for Freedom of the Press, Newsroom Searches, First Amendment Handbook (1999).
- Robert F. Aldrich, Privacy Protection Law in the United States, U.S. Dept. of Commerce, Nat'l Telecomm. and Info. Admin., Washington, D.C. (1982).
- Beth Ann Reid, A Manual for Complying with the Freedom of Information Act and the Privacy Protection Act, Dept. of Mgmt. Analysis and Sys. Dev., Richmond, Va. (1980).
- 28 C.F.R. §§ 59.1-59.6 (1995) (U.S. Attorney General guidelines to federal agents on how to obtain search warrants regarding documentary materials held by disinterested third parties).
- Zurcher v. Stanford Daily, 436 U.S. 547 (1978) (finding no constitutional violation in a Fourth Amendment search of a newspaper's press room before existence of the PPA).
- Citicasters v. McCaskill, 89 F.3d 1350 (8th Cir. 1996) (stating that "the Privacy Protection Act does not require an application for a search warrant to describe any exceptions to the Act, the district court erred in imposing such requirements on the defendants in this case").
<urn:uuid:24a6636f-45e8-48d0-b77b-66c6ddbc451a>
CC-MAIN-2015-18
https://epic.org/privacy/ppa/
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658904.34/warc/CC-MAIN-20150417045738-00106-ip-10-235-10-82.ec2.internal.warc.gz
en
0.927672
1,586
2.9375
3
3.042001
3
Strong reasoning
Crime & Law
The image usually conjured up by the word robot is that of a mechanical being, more or less human in shape. Common in science fiction, robots are generally depicted as working in the service of humanity, but often escaping the control of their human masters and doing them harm. The word robot comes from the Czech writer Karel Čapek’s 1921 play ‘R.U.R.’ (which stands for “Rossum’s Universal Robots”), in which mechanical beings manufactured to be slaves for humanity rise up in rebellion and kill their creators. Thus the fictional image of robots can be dramatic and troubling, expressing the fears that people may have of a mechanized world over which they cannot maintain control. The history of real robots is rarely as dramatic, but where developments in robotics may lead remains to be seen. Robots exist today. They are used in factories in highly industrialized countries such as the United States, Germany, and Japan. Robots are also being used for scientific research, in military programs, and as educational tools, and they are being developed to aid people who have lost the use of their limbs. These devices, however, are for the most part quite different from the androids, or humanlike robots, and other robots of fiction. They rarely take human form, they perform only a limited number of set tasks, and they do not have minds of their own. In fact, it is often hard to distinguish between devices called robots and other modern automated systems (see Automation). Although the term robot did not come into use until the 20th century, the idea of mechanical beings is much older. Ancient myths and fantastic tales described walking statues and other marvels in human and animal form. From at least the 3rd century bc, craftsmen in Greece and China constructed lifelike mechanical objects, such as birds and puppets. Such mechanisms, called automatons, frequently used water or steam for motive force. 
Automatons were particularly popular in the Islamic world from the 9th through the 13th century and in Europe from the 16th through the 19th century. The revival of European interest in automatons followed the development of steel springs, which were quickly adopted as a power source. European church towers provide fascinating examples of clockwork figures from medieval times. By the 18th century, a number of extremely clever automatons became quite famous for a while. Swiss craftsman Pierre Jacquet-Droz, for example, built mechanical dolls that could draw a simple figure or play music on a miniature organ. Clockwork figures of this sort are rarely made any longer, but many of the “robots” built today for promotional or other purposes are still basically automatons. They may incorporate technological advances such as radio control, but for the most part they can only perform a set routine of entertaining but otherwise useless actions. Modern robots used in workplaces arose more directly from the Industrial Revolution. As factories developed, more and more machine tools were built that could perform some simple, precise routine over and over again on an assembly line. The trend toward increasing automation of production processes proceeded through the development of machines that were more versatile and needed less tending. One basic principle involved in this development was what is known as feedback, in which part of a machine’s output is used as input to the machine as well, so that it can make appropriate adjustments to changing operating conditions. The most important 20th-century development, for automation and for robots in particular, was the invention of the computer. When the transistor made tiny computers possible, they could be put in individual machine tools. Modern industrial robots arose from this linking of computer with machine. By means of a computer, a correctly designed machine tool can be programmed to perform more than one kind of task. 
If it is given a complex manipulator arm, its abilities can be enormously increased. The first such robot was designed by Victor Scheinman, a researcher at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology in Cambridge, Mass. It was followed in the mid-1970s by the production of so-called “programmable universal manipulators for assembly” (PUMAs) by General Motors and then by other manufacturers in the United States. The nation that has thus far exploited this new field most successfully, however, is Japan. It has done so by making robot manipulators without trying to duplicate all of the motions of which the human arm and hand are capable. The robots are also easily reprogrammed and are therefore more adaptable to changing tasks on an assembly line. At least one Japanese factory uses an assembly line of robots to make still more robots. Except for firms that were designed from the start around robots, such as several of those in Japan, industrial robots are still only slowly being placed in production lines. Most of the robots in large automobile and airplane factories are used for welding, spray-painting, and other operations where humans would require expensive ventilating systems. Similarly, robots perform many highly repetitive or dangerous jobs in die casting and electronic assembly lines. Current work on industrial robots is devoted to increasing their sensitivity to the work environment. Computer-linked television cameras serve as eyes, and pressure-sensitive “skins” are being developed for manipulator grippers. Many other kinds of sensors can also be placed on robots. Robots are also used in many ways in scientific research, particularly in the handling of radioactive or other hazardous materials. Many other highly automated systems are also often considered to be robots. 
These include the probes that have landed on and tested the soils of the moon, Venus, and Mars, and the pilotless planes and guided missiles of the military. Although true androids are still only a distant possibility, a number of humanoid robots have been developed. Among the more prominent creations to appear in the early 21st century were Honda Motor Company’s ASIMO, a two-legged robot that could walk smoothly and climb or descend stairs, and Sony Corporation’s SDR-4X, a “personal entertainment robot” that used sophisticated microelectronics and sensors to walk, sing, and interact with humans. At the World Expo 2005 in Aichi, Japan, scientists from Tohoku University even unveiled a ballroom-dancing robot that was capable of reacting to the movements of its human partner. Nevertheless, a true android would have to house or be linked to the computer equivalent of a human brain. Despite some claims made for the future development of artificial intelligence, computers are likely to remain calculating machines without the ability to think or create for a long time. (See also Artificial Intelligence.) Research into developing mobile, autonomous robots is of great value. It advances robotics, aids the comparative study of mechanical and biological systems, and can be used for such purposes as devising robot aids for the handicapped. As for the “thinking” androids of the possible future, the well-known science-fiction writer Isaac Asimov laid down rules for their behavior in the early 1940s. Asimov’s first law is that a robot may not harm a human being either through action or inaction. The second is that robots must obey humans except when the commands conflict with the first law. The third is that robots must protect themselves except when this comes into conflict with the first or second law. Asimov later added a “zeroth law,” which states that robots must protect all humanity.
Future androids might have their own opinions about these laws, but such matters must await their time.
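The feedback principle mentioned earlier, in which part of a machine's output is fed back in as input so it can adjust to changing operating conditions, can be illustrated with a toy thermostat loop. This is a sketch only; the gain, setpoint, and temperatures are arbitrary numbers chosen for the example:

```python
# Toy proportional feedback loop: each cycle the "machine" measures its own
# output (the room temperature) and corrects itself toward the setpoint.
def run_thermostat(setpoint, temperature, gain=0.5, cycles=20):
    for _ in range(cycles):
        error = setpoint - temperature   # feedback: compare output to the target
        temperature += gain * error      # correction proportional to the error
    return temperature

final = run_thermostat(setpoint=21.0, temperature=15.0)
print(round(final, 2))  # after 20 cycles the error has shrunk to nearly zero
```

The same closed-loop idea, scaled up with sensors and computers, is what lets an industrial robot arm adjust its grip or path as conditions change.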
<urn:uuid:e282b386-f523-41bf-81f3-0063a8dd8d7e>
CC-MAIN-2022-27
https://kids.britannica.com/students/article/robot/276749
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00470.warc.gz
en
0.965204
1,549
3.515625
4
2.887555
3
Strong reasoning
Science & Tech.
A router is just a device that determines how messages move through a computer network. All routers come with special software loaded on them, installed by the manufacturer - this is called a firmware. The firmware is similar to Android or iOS on smartphones, or Windows and OSX on desktop or laptop computers. The router firmware enables it to perform special functions: connecting to other networks, such as the Internet; assigning addresses to the people connecting to the router; setting up a firewall to protect those people using the router; running an Access Point for wireless connections; and more. There are some types of routers, typically used outdoors in special scenarios, that allow you to connect two or more routers together to form long-distance links. These are often used by Wireless Internet Service Providers - you can read more about these types of networks in Types of Wireless Networks. Not all routers can create these router-to-router connections, which can be very useful in setting up community networks. The Commotion Wireless firmware is intended to be installed on a variety of routers to allow them to create mesh router-to-router connections.

Why do you need to install a special firmware? What is actually going on when you install a firmware?

When you look at a router, normally you can see Ethernet ports, a power jack, LED lights, and sometimes Wi-Fi antennas. There are a lot of other things going on inside! If you opened the router, you would see a small circuit board that looks very similar to what is inside a computer (because it is a tiny computer!). Routers have a Central Processing Unit - a computer chip that acts as the brain - to process data coming in, decide what to do with it, and then send it on its way. There are special computer chips to handle the Ethernet connections, and radio circuitry to transmit and receive the Wi-Fi signals. Routers also have memory, to store programs and data temporarily, just as a computer does.
When installing a new firmware such as Commotion Wireless, we are dealing with the router’s storage. All routers have a storage device in them, much like a computer’s hard drive. It is called “flash memory”, “flash storage”, or sometimes just “flash”. It ranges in size from 4MB (MegaBytes) to 16, 32, or even on some very powerful routers, 128MB or more. Even the 128MB of flash in very high-end routers is a small amount of space - and 4 or 8MB is tiny! An entire router operating system has to fit in that amount of storage space. Two manufacturers make wireless routers that are often used in community wireless networks: Ubiquiti and TP-Link. Why? These manufacturers use hardware that is easier to write free and open-source firmware for - there are many types: OpenWRT, DD-WRT, Tomato, and others. Both manufacturers ship their hardware with proprietary (custom and non-modifiable) firmware. TP-Link doesn’t give it a special name, but Ubiquiti calls theirs “AirOS”. When the router is built at the factory, it is also loaded with the firmware specific to each router. We can see in the two example routers below, each has a different firmware installed. When you install a new firmware on a router, you must send a firmware file to the router using a special method. The methods for sending the file are detailed in the Commotion Installation + Configuration documentation, or in the documentation of whatever firmware you are installing. Those firmware files contain all of the programs and data necessary to run the router, and are specific to each router. You can’t install a TP-Link file on a Ubiquiti, or vice versa - it won’t work. For that matter, you can’t install a firmware intended for a specific TP-Link on a different model - that won’t work either! After that firmware file is loaded on the router, it will overwrite the old files in storage with the new files. The router will then restart, and begin running the new firmware. 
At this point, you will have a TP-Link and Ubiquiti router running the same type of firmware - Commotion Wireless in our example. They will behave nearly the same way, as long as the routers have similar features. As mentioned at the beginning, the firmware provided on most routers can do the majority of things you need to run a small network. Manufacturers try to balance ease-of-use, the features people want, security, and price. Not every single function is included in every router to keep the price down, and keep it easy to use. If you are building a community network, you may need some more advanced features in the routers you are installing - such as quality of service (QoS), advanced firewalls, gateway sharing, or mesh routing. There are many alternative firmwares that bring such features: Commotion Wireless, LibreMesh, qMp, or others. It should be noted that all of these firmwares are built on top of OpenWRT - an open source router firmware with advanced features. It is very powerful, so other firmware makers use it as the starting point for other projects. Many people use alternative firmware because it is open source. This is the practice of making the software code available for anyone to use, copy, modify, and change for other projects. Many people take it as a philosophy as well - to share and collaborate on projects, rather than making them private and proprietary. It is too large of a subject to discuss here, read more at Wikipedia about open source software if you are interested!
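One habit worth adopting when handling any of these firmwares: before sending a firmware file to a router, verify the file's checksum against the one published by the project, so a download corrupted in transit is caught before it ever reaches the device. Below is a minimal sketch in Python; the filename is made up, and the "published" checksum is computed on the spot here rather than copied from a real release page:

```python
import hashlib

def sha256_of(path):
    # Hash the file in chunks, so even a large firmware image uses little memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded firmware image (not a real Commotion release file).
with open("example-firmware.bin", "wb") as f:
    f.write(b"pretend firmware contents")

# In real use, this value comes from the firmware project's download page.
published_checksum = sha256_of("example-firmware.bin")

if sha256_of("example-firmware.bin") == published_checksum:
    print("checksum OK - safe to flash")
else:
    print("checksum mismatch - do NOT flash this file")
```

The exact transfer method still comes from the firmware's own installation documentation; the checksum step simply guarantees the bytes you send are the bytes the project released.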
<urn:uuid:fbed9a30-ad1e-4cfe-b64d-8c4ed83fb5d4>
CC-MAIN-2018-05
https://commotionwireless.net/docs/cck/installing-configuring/what-is-a-firmware/
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887660.30/warc/CC-MAIN-20180118230513-20180119010513-00281.warc.gz
en
0.940324
1,172
3.5
4
1.995749
2
Moderate reasoning
Hardware
In the pages of the Hebrew Bible we find beautiful imagery and noble typology, contours of women who have gone before us and left their mark. In the material known to us today as the Old Testament, we read of women who were prophets, military leaders, priests, wise women and wisdom personified. However, to study the lives of these women is no easy task. The reality is, the stories, as we have them, are not handed down to us from the voices of the women themselves; rather, what we have is an image-rich narrative developed from a covenantal history, drawn upon the map of patriarchy. The narratives, then, are primarily concerned with the public lives of men who are or are in some way related to the patriarchs and are connected to the emergence of the monarchy. It must also be stated that the narratives are recorded, copied, edited and compiled by men who live many centuries after those women and men whose stories they are trying to convey. We must understand at the outset that the material we have existed first as oral tradition and communities were formed around story; many of these stories endured across the generations to be recorded during the compilation of the codices which are now considered canonical by persons of Jewish and Christian faith. To do these women any justice we must unearth information about their world, status, society and gender roles in ancient Israel. We are helped, then, by considering archaeology and anthropological studies in concert with the Scriptures to gain a better picture of life in ancient Israel for women. In the Hebrew Bible, we find the stories of a people and a society who traverse the land of the Ancient Near East for more than 1,200 years (Murphy, Cullen: 1993). Of the 1,426 persons named within the narrative of the Old Testament, 111 of these named persons are women.
While this seems like a small number, the witness of the lives of these women is powerful and their presence in this male-dominated text reveals a prominence held by certain women. Though a casual reading of the Old Testament might leave us with the impression that women were confined to the domain of the home and their sole contribution was procreation, a closer look demonstrates another dynamic altogether. Mayer Gruber points out that women served as judges (Judges 4.4-5), officiated funerals as clergy (Jer. 9.16-19; 2 Chron. 35.25), slaughtered animals in priestly and domestic rites, served as prophetesses and sages (2 Samuel 14; 20.16-22), and both nursed children and read Scripture in public settings (Gruber, Mayer: 1999). Gruber has also rightly demonstrated that within the Hebrew Scriptures we have accounts of women as priestesses (Exodus 38.8; 1 Sam. 2.22), poets (Exodus 15.21; Judges 5.1-31; Proverbs 31.1-9), musicians (Ps. 68.26), “queens, midwives; wet-nurses; babysitters; business persons; scribes; cooks; bakers; producers of cosmetics (I Sam. 8.13) as well as innkeepers and prostitutes (Josh. 2).” While the scope of this study will not allow us to consider the 111 named women of the Hebrew Bible, we will take a representative group and trace their lives, their communal impact and their covenantal significance. We do this in effort to illuminate the reality that though the narrative of the Hebrew Bible is primarily concerned with the lives of the patriarchs, there exists also a counter-narrative that demonstrates the activity of God present and powerful in the lives of many women, which reverberates through the nation of Israel for the good of the world. The group we will consider here is the women of the genealogy of Jesus offered in Matthew’s Gospel, as each of these women emerges from the story of ancient Israel and the tradition of their contributions endures into the New Testament canon and beyond.
The narratives of these women, Tamar, Rahab, Ruth, and Bathsheba, offer us traditions of women who were significant in the life of ancient Israel, representatives of life in a given place and time who simultaneously rise from the narrative to demonstrate women as agents of God’s covenantal and universal work.
<urn:uuid:97fae853-ae6e-4918-b99c-2923b3b2d789>
CC-MAIN-2015-35
http://www.kimberlymajeski.com/home/category/theology
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645356369.70/warc/CC-MAIN-20150827031556-00118-ip-10-171-96-226.ec2.internal.warc.gz
en
0.965152
876
3.359375
3
2.953039
3
Strong reasoning
Religion
Psoriasis is a common skin disorder, affecting about 7.5 million people in the United States. It causes skin cells to multiply rapidly and to accumulate on the surface of the skin. These extra skin cells create thick, scaly patches called plaques. Plaques most often develop on the:
- lower back
- palms of the hands
- soles of the feet
The affected areas of skin typically appear reddened and contain dry, itchy scales. They may also be more sensitive and cause a burning or painful sensation on the skin. If you have psoriasis, you’re probably familiar with these uncomfortable symptoms. You may also know that psoriasis is a chronic condition that can be managed with treatment, but not cured. But you may not be sure why your disorder developed in the first place or why your symptoms come and go. While the specific causes of psoriasis aren’t completely understood, learning about the possible triggers for symptoms can prevent future flare-ups and improve your quality of life.

What causes psoriasis?

The exact cause of psoriasis isn’t known. Some medical researchers have theories about why people develop psoriasis. According to the National Psoriasis Foundation, an estimated 10 percent of Americans inherit genes that increase their likelihood of getting psoriasis. Of those 10 percent, however, only about 2 to 3 percent actually develop psoriasis. Scientists have identified about 25 gene variants that can increase risk for psoriasis. These genetic variants are believed to cause changes in the way the body’s T cells behave. T cells are immune system cells that normally fight off harmful invaders, such as viruses and bacteria. In people with psoriasis, however, T cells also attack healthy skin cells by mistake.
This immune system response results in a range of reactions, including:
- enlargement of blood vessels in the skin
- increase in white blood cells that stimulate the skin to produce new cells more quickly than usual
- increase in skin cells, T cells, and additional immune system cells
- accumulation of new skin cells on the surface of the skin
- development of the thick, scaly patches associated with psoriasis
Typically, these effects occur in response to a trigger.

What triggers psoriasis?

The symptoms of psoriasis often develop or become worse due to certain triggers. These can be environmentally or physically related. The triggers vary from person to person, but common psoriasis triggers include:
- cold temperatures
- drinking too much alcohol
- having another autoimmune disorder, such as AIDS or rheumatoid arthritis
- infections that cause a weakened immune system, such as strep throat
- a skin injury, such as a cut, bug bite, or sunburn
- excessive stress and tension
- certain medications, including lithium, beta blockers, and anti-malarial drugs
You can identify your specific triggers by tracking when you experience psoriasis symptoms. For example, did you notice a flare-up after a stressful week at work? Did your symptoms become worse after having a beer with friends? Staying vigilant about when symptoms occur can help you determine potential psoriasis triggers. Your doctor can also evaluate your medications and overall health to help you pinpoint possible triggers. Make sure to tell your doctor about any prescription or over-the-counter medications you may be taking. Your doctor may switch you to another medication or make a change in your dosage if they suspect your medication is causing your outbreaks. You shouldn’t stop taking any medications unless your doctor instructs you to do so.

How can psoriasis flare-ups be prevented?

While you can’t change your genes, you can prevent psoriasis flare-ups by controlling your symptoms through regular treatments.
These include applying topical medications, taking oral medications, or receiving injections to reduce uncomfortable psoriasis symptoms. Phototherapy or light treatment can also reduce the incidence of psoriasis. This type of treatment involves using natural or artificial ultraviolet light to slow skin growth and inflammation. Aside from medical treatments, making certain lifestyle adjustments can also reduce your risk for a psoriasis flare-up. These include:

Reducing stress

While stress can have a negative impact on anyone, it’s particularly problematic for people with psoriasis. The body tends to have an inflammatory reaction to stress. This response can lead to the onset of psoriasis symptoms. You can try reducing the amount of stress in your life by doing yoga, meditating, or meeting with a therapist on a regular basis.

Taking care of your skin

Injuries to the skin, such as sunburns and scrapes, can trigger psoriasis in some people. These types of injuries can usually be prevented by practicing good skin care. When doing activities that may cause skin injury, you should always take extra precautions. Use sunscreen and wear a hat when spending time outside. You should also use caution when engaging in outdoor activities and contact sports, such as basketball or football.

Practicing good hygiene

Infections are known to trigger psoriasis because they put stress on the immune system, causing an inflammatory reaction. Strep throat in particular is associated with the onset of psoriasis symptoms, especially in children. However, psoriasis flare-ups may occur after an earache, tonsillitis, or a respiratory or skin infection. These types of infections can usually be prevented with good hygiene practices. Make sure to wash your hands often throughout the day. Also avoid sharing cups and eating utensils with other people. It’s also important to clean a cut or wound properly and to keep it covered so it doesn’t get infected.
Eating a healthy diet

Being obese or overweight appears to make psoriasis symptoms worse. So it’s important to manage your weight by exercising regularly and eating a healthful diet. If you have trouble eating healthy, you may want to see a nutritionist for help. They can help you figure out how much food and which foods you should eat every day to lose weight. Though psoriasis can’t be cured, it can be controlled. Working with your doctor to find treatments that relieve the itching and discomfort can ease psoriasis symptoms. Taking steps to identify triggers for your symptoms and limiting your exposure to these triggers can also help prevent future flare-ups.
<urn:uuid:b0316956-b080-4b69-b43c-2d7965019275>
CC-MAIN-2018-09
http://idolreplicas.info/health/psoriasis/causes
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817999.51/warc/CC-MAIN-20180226025358-20180226045358-00726.warc.gz
en
0.939135
1,307
3.578125
4
1.690463
2
Moderate reasoning
Health
ORIGINS OF SCARECROW

Jonathan Crane, aka the Scarecrow, first appeared in World’s Finest Comics #3 (1941). A psychology teacher shunned by his peers, he dressed poorly and became obsessed with fear. The Scarecrow’s first story had no fear toxin or elaborate plots. He simply dressed up as a scarecrow to scare people for money. If his efforts to scare people for money failed, he resorted to violence and shot his victims with a gun. In later stories, the Scarecrow’s backstory was fleshed out, giving him a tormented childhood and, later, fear compounds developed through chemistry: drugs that, whether in liquid, solid or vapor form, would induce panic, fear and terror in his victims. His experiments on prison inmates led him to further research and later experimentation upon the population of Gotham City.

THE TOXIN OF FEAR CHEMICALS

Jonathan Crane is a psychologist who uses fear as a weapon, making him a unique and intriguing character that offers insight into the nature of fear. The Scarecrow’s greatest weapon is his mastery of the psychology of fear. He uses this knowledge to create fear-inducing chemicals and gadgets that he uses to manipulate and terrorize his victims. The Scarecrow character highlights the power of fear and how it can be used as a tool for control and domination. It also demonstrates how fear can be used to manipulate people and exploit their weaknesses, even in seemingly rational and educated individuals. The Scarecrow specializes in creating personalized living nightmares. His drug of choice? A fear toxin that, once consumed or inhaled, induces a highly paranoid psychotic state in which victims experience their very worst fears and phobias as overwhelmingly powerful hallucinations more real than anything they could ever have imagined: true living nightmares… hell on earth.
The Scarecrow’s fear toxin gives unholy life to mankind’s worst imaginings, bringing forth deep, dark mental constructs from the creaky crevices of one’s mind out into hallucinatory, realer-than-real, seemingly three-dimensional physical reality, completely overwhelming all the body’s senses and natural biological processes. These mind constructs, or thought beings, are the result of the psychoactive properties of the Scarecrow’s fear toxin at the physical level, bringing agonizing life to that which is purely of the mental abstract and the deep unconscious realm of the sub-psyche: dreams, impressions and memories. Personal and primal fears are dragged out of the dark dungeon of our mind, kicking and screaming, into apparent reality. Our body’s normal, survival-oriented fear response scales up or down in response to dangerous situations, physical threats and other fear triggers. The Scarecrow’s fear toxins, however, drive our natural biological responses to fear into overdrive, allowing no cooling-off period to return to homeostasis.

THE BIOLOGY OF FEAR

Fear exists in human beings (and other animals that have a limbic system) as a survival-oriented mechanism. If we were unable to experience fear, our chance of death from predators and dangerous situations would increase, and without the capacity for fear we would likely be extinct as a species. The Scarecrow also represents the dark side of fear. While fear can be a natural and necessary survival mechanism, it can also become a destructive force that consumes people and causes them to make irrational decisions. The Scarecrow embodies this aspect of fear, showing how it can be used to spread terror and cause harm.

How does fear work? Our body perceives stimuli in our environment through, usually, a combination of our senses, such as sight, smell and touch. For fear to activate, the amygdala in our brain signals the hypothalamus, which activates our pituitary gland.
The pituitary gland is where our nervous system meets our endocrine, or ‘hormone’, system. Through these systems, our body is able to activate the fight-or-flight response. During a heightened fear response our blood pressure, breathing and heart rate increase, and blood is driven to the limbs in order to take action by fighting or fleeing. Adrenaline is dumped, making us temporarily stronger, and our senses become hyper-aware of everything in our immediate environment, increasing arousal while narrowing our overall stimulus according to our biology’s hierarchical systems.

THE MANY FACES OF FEAR

While fear toxins are the stuff of comic book fiction, there is no shortage of real-world toxins and altered states with truly horrifying effects. Take the infamous, mysterious outbreak of mad bird behavior in 1961 that would go on to inspire a short story and was the likely inspiration for Alfred Hitchcock’s terror-inducing film The Birds.

Domoic acid can cause confusion, disorientation, scratching, seizures and death in birds that eat the stuff, which gets concentrated as it moves up the food chain – Wynne Parry

A toxin (in this case from an algae) got into the birds’ systems and drove them mad with confusion and disorientation, which appears, to humans at least, to be rabid attack behavior. The Scarecrow uses his fear toxins to induce fantastical, schizophrenia-like responses and vivid hallucinations in his intended victims. Whether they die of pure terror, attack their own loved ones or endure the episode is a gamble. The real world has no shortage of fear- and hallucination-induced confusion, slaughter and carnage. Let’s take a look at another example.

A Canadian man who was found not criminally responsible for beheading and cannibalising a fellow passenger on a Greyhound bus has been granted freedom from all supervision – The Guardian (Australian Edition)

The man on the bus believed God was speaking to him directly, and was known to be mentally ill.
His personal reality and falsified perceptions caused him to act in an uncharacteristic manner, decapitating, hacking away at and trying to EAT a fellow passenger. When later interviewed, and on medication for his condition, the man who formerly thought God was talking to him came to know it was not true, and that his mental illness brought on episodes of “…hearing voices or having delusions. You don’t know what is real”. There are many different types of altered states human beings can experience. Some causes include sensory deprivation, extreme tiredness or hunger, synthetic drugs, mental illness and more. The Scarecrow’s fear toxins are designed to work on multiple levels, driving our body’s normal biological fear process into overload. Like redlining a car, extreme prolonged fear creates real physical damage in a human body, starting at the cellular and chemical level, as well as causing the victim to act in a panicked, highly irrational state due to the false data fed to their own senses.

FEAR AND MADNESS

We play with the idea that maybe we know our own fears or have conquered our fears, yet true primal fear is not something you can play with or casually entertain. It is a deep, intense physical and psychological reaction to what is in our genes, our very biology, and to what is happening directly in our immediate environment. Primal fear unchecked can lead us from ordinary states of fear into becoming a gibbering mass of quivering, hysterical, pure terror. The Scarecrow’s personal playground and narrow obsession is driving a person to that very place: to the truly hysterical, panic-inducing, pants-wetting fear that may even kill someone through stress or a heart attack. It may cause that individual to attack their friends and family, or just leave them as helpless as a newborn baby, gasping for oxygen and raving like a lunatic.
While there are many types of fear and fear states, the Scarecrow continues to evolve his chemical cocktails, finding new chemical strains and new ways to instill phobias, psychological delusions, existential angst and more in his efforts to terrorize the populace of Gotham City.

SOMETHING WICKED THIS WAY COMES

The Scarecrow is a complex character that offers a nuanced perspective on the nature of fear. Through his actions and motivations, he highlights the power of fear, both as a tool and as a source of terror. He also demonstrates how fear can be both empowering and debilitating, and how it can be used to control and manipulate people. In this way, the Scarecrow serves as a reminder of the need to understand and manage fear, both in oneself and in society as a whole.

Image credits:
- Scarecrow God of Fear image by SmilexVillainco
- The Birds movie still image and article quote from: 
- Guardian quote on mentally ill man on bus: 
- Hell on Earth image from Yannick Bouchard
- Uma Oswald painting: 
- Psychotic Waltz image from music video: 
- Gygaxian Mouther image: 
- Arkham Knight Scarecrow by Kla-Jezebeth on DeviantArt
- Syringe Needle fingers Scarecrow by Hessianforhire on DeviantArt
- Scarecrow and birds in field by M-hugo on DeviantArt
- Deranged scarecrow (bald) with single syringe by Austin Mengler (DeviantArt)
- Human Nervous System ref image
- Anatomy of the Brain ref image from “The Nervous System and How it Works” at The Apprentice Doctor website (full article link below)
<urn:uuid:6d851d2a-6c81-4ef8-bfa7-e06997ebf0a7>
CC-MAIN-2023-50
http://infernalbatcave.com/batmans-scarecrow-as-mythic-archetype-the-avatar-of-fear/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100677.45/warc/CC-MAIN-20231207153748-20231207183748-00290.warc.gz
en
0.942414
1,906
2.625
3
2.913029
3
Strong reasoning
Health
Some of the different positions may seem to overlap or combine duties, resulting in a confusing hodgepodge of responsibilities if you dwell overlong on what Escoffier had in mind. Basically, once an order is received by the sous chef, the order is called out to the various departments. The entremetier (entrée preparer) may also have a team of commis or demi-chefs under their jurisdiction, as can the poissonnier (fish cook) or any larger station which needs more than one chef at the station. The confiseur, in larger restaurants, prepares candies and petits fours instead of the pâtissier. All modern professional kitchens run according to a strict hierarchy, with the French brigade system used in order to ensure the whole operation runs as smoothly as possible. Until after the French Revolution and the subsequent rise of restaurants, this caste of cooks continued to work exclusively for the aristocracy. Not all restaurants have a separate executive chef and chef de cuisine (defined below), and an executive chef may spend much of his or her time cooking instead of being involved in administrative duties. The preparation of the dishes is very accurate thanks to the ingenuity and experience of the chef and his kitchen brigade, and only the freshest seasonal and regional ingredients are used. The modern brigade system enables different individuals to fulfill different culinary roles in the kitchen, such as making dessert or overseeing all of the other chefs. May prepare these and then give them to the garde manger for distribution to the various station chefs. The kitchens, and the dishes served, were characterized by excess, disorganization, inefficiency, and even chaos. The commis, or apprentice, works under a station chef to learn how the station operates and its responsibilities. Still, some of the positions will be combined.
Communard: prepares the meal served to the restaurant staff. Tasks include basic food preparation, such as washing salad and peeling potatoes, in addition to basic cleaning duties. Apprenti(e) (apprentice): often students gaining theoretical and practical training in school and work experience in the kitchen. Supervises and coordinates the various station chefs (chefs de partie). The pastry chef frequently supervises a separate kitchen area or a separate shop in larger operations. The traditional system of kitchen structure -- the brigade led by the chef -- has venerable roots in European military organizations. By the 1820s, chefs were wearing uniforms purportedly based on those worn by soldiers in the Turkish army. This structured team system delegates responsibilities to different individuals who specialize in certain tasks in the kitchen. It also appeared they had a few commis apprentices and kitchen assistants. The roundsman (tournant, or swing cook) works as needed throughout the kitchen. The modern kitchen is more scientific and requires more specific skill sets than classical kitchens. And with grand-sized hotels, they needed grand-sized kitchens to feed all the patrons.
A worker with no supervisory responsibilities is called a cook. If you just saw the photographs from the restaurant, could you identify what each person does in their brigade? The structure is capable of being adapted to whatever size staff is available and is still used to this day. The sommelier chooses which wines to purchase, including considerations of what wines will pair well with menu items; prepares the wine list; and helps guests choose a wine for their dinner. I could also guess they had a sous chef, a chef garde manger (pantry chef, in charge of cold salads, appetizers, etc.), a chef poissonnier (fish chef), a chef grillardin (grilled foods) and possibly a chef friturier (fried items). Well, then you know something of the brigade system already. White eventually became the standard to emphasize cleanliness and good hygiene. Executive chefs may oversee more than one restaurant kitchen, as when there are several restaurants in a hotel or resort. The station chef oversees the preparation, portioning, and presentation of the menu items according to the standards of the executive chef or chef de cuisine.
Only the largest establishments have an executive chef, and it is primarily a management role; executive chefs are often responsible for the operation of multiple outlets, and thus they do very little actual cooking! In the 19th century, French chef Georges-Auguste Escoffier developed the kitchen brigade system that is still used around the world to this day.
<urn:uuid:714fd57a-8430-47f0-a5cd-033e8155a19d>
CC-MAIN-2019-18
http://fontidelvulture.it/modern-kitchen-brigade-definition.html
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578759182.92/warc/CC-MAIN-20190426033614-20190426055614-00283.warc.gz
en
0.96259
1,222
2.84375
3
2.575191
3
Strong reasoning
Food & Dining
Sometimes you have to deal with binary response variables. In this case, several OLS hypotheses fail and you have to rely on Logit and Probit. Good afternoon, guys, I hope you are having a restful Sunday! Today we will broadly discuss what you must know when you deal with a binary response variable. Even though I don’t want to provide you with a theoretical explanation, I need to highlight this point. OLS is known as a Linear Probability Model but, when it comes to a binary response variable, it is not the best fit. Moreover, there are several problems when using the familiar linear regression line, which we can understand graphically. As we can see, there are several problems with this approach. First, the regression line may lead to predictions outside the range of zero and one. Second, the functional form assumes that a one-unit change in the explanatory variable has the same marginal effect on the dichotomous variable at every level of the explanatory variable, which is probably not appropriate. Third, a residuals plot would quickly reveal heteroskedasticity, and a normality test would reveal an absence of normality. Logit and Probit models solve each of these problems by fitting a nonlinear function to the data and are the best fit to model a dichotomous dependent variable (e.g. yes/no, agree/disagree, like/dislike). The choice of Probit versus Logit depends largely on your preferences. Logit and Probit differ in how they define f(). The logit model uses the cumulative distribution function of the logistic distribution. The probit model uses the cumulative distribution function of the standard normal distribution to define f(). Both functions will take any number and rescale it to fall between 0 and 1. Hence, whatever α + βx equals, it can be transformed by the function to yield a predicted probability. If you are replicating a study, I suggest you look through the literature on the topic and choose the most used model. Enough theory for today!
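The two link functions are just one-line formulas. As a language-agnostic sketch (in Python here, purely for illustration; the rest of the post works in Stata), this shows how both links squash any value of α + βx into the (0, 1) interval:

```python
import math

def logistic_cdf(z):
    # Logit link: CDF of the logistic distribution.
    return 1.0 / (1.0 + math.exp(-z))

def normal_cdf(z):
    # Probit link: CDF of the standard normal, via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Whatever alpha + beta*x equals, both links map it into (0, 1).
for z in (-5.0, -1.0, 0.0, 1.0, 5.0):
    p_logit, p_probit = logistic_cdf(z), normal_cdf(z)
    assert 0.0 < p_logit < 1.0 and 0.0 < p_probit < 1.0
```

Both links return 0.5 at zero and approach 0 and 1 in the tails; they differ only in how fat those tails are.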
In both models you can decide to include factor variables (i.e. categorical ones) as a series of indicator variables by using the i. prefix. Ready to start? Let’s take our friendly dataset, auto.dta. Don’t you remember the command? I introduced it before, but we can revise it now: sysuse auto

Probit and Logit

Remember that Probit regression uses maximum likelihood estimation, which is an iterative procedure. In order to estimate a Probit model we must, of course, use the probit command. Nothing new under the sun. probit foreign weight mpg i.rep78 The above output contains several elements we have never seen before, so we need to familiarize ourselves with them. The first one is the iteration log, which indicates how quickly the model converges. The first iteration (called Iteration 0) is the log likelihood of the “null” or “empty” model; that is, a model with no predictors. At each iteration, the log likelihood increases because the goal is to maximize the log likelihood. When the difference between successive iterations is very small, the model is said to have “converged” and the iterating stops. In the top right part, we can find the Likelihood Ratio Chi-Square Test (LR chi2) and its p-value. The number in the parentheses indicates the degrees of freedom of the distribution. Instead of R-squared we find McFadden’s Pseudo R-Squared, but this statistic is different from R-Squared and its interpretation for the Probit model also differs. The Probit regression coefficients give the change in the z-score for a one-unit change in the predictor. I added a factor variable, some levels of which were dropped due to multicollinearity. As we already discussed in the post related to OLS regressions, there are several options available for this command, like vce(robust), noconstant, etc. If you missed the post, review it here. After having performed the regression, we can proceed with post-estimation results. We can test for an overall effect of rep78 using the test command.
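The two header statistics come straight from the iteration log. A small arithmetic sketch (Python for illustration; the log-likelihood values below are made up, not the auto.dta output) shows how the LR chi-square and McFadden's pseudo R-squared are built from the null and final log likelihoods:

```python
# Hypothetical log-likelihoods of the kind the iteration log reports:
# ll_null is Iteration 0 (model with no predictors),
# ll_model is the final, converged iteration.
ll_null, ll_model = -45.03, -22.13  # illustrative numbers only

lr_chi2 = 2.0 * (ll_model - ll_null)    # Likelihood Ratio chi-square statistic
pseudo_r2 = 1.0 - ll_model / ll_null    # McFadden's pseudo R-squared

print(round(lr_chi2, 2), round(pseudo_r2, 3))
```

The LR statistic grows with the gap between the two log likelihoods, and the pseudo R-squared is the proportional improvement over the empty model, which is why it is not interpretable like an OLS R-squared.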
Below we see that the overall effect of rep78 is statistically insignificant. test 3.rep78 4.rep78 // Joint test that the coefficients are all equal to zero We can also test additional hypotheses about the differences in the coefficients for different levels of rep78. Below we test that the coefficient for rep78=3 is equal to the coefficient for rep78=4. test 3.rep78 = 4.rep78 Another thing we can decide to do is to save a probit coefficient in a local macro: local coef_probit = _b[x] Then we can predict the probabilities from the probit after having called the regression command. predict is the command that gives you the predicted probabilities that the car type is foreign, separately for each observation in the sample, given that observation’s regressors. label var y_hat “Probit fitted values” If you still believe there is a difference between computing a logit or a probit, then you should try to regress this: reg y_hatprobit y_hatlogit Look at the R-Squared. The models are practically equal. Feel free to switch between probit and logit whenever you want. The choice should not generally significantly affect your estimates. logit foreign weight mpg i.rep78 There is almost no difference between the logistic and logit models. The only thing that differs is that -logistic- directly reports coefficients in terms of odds ratios, whereas if you want to obtain them from a logit model, you must add the or option. We can think about logit as a special case of the logistic command. They both support the by() and if options and several others we have already reviewed. When you use these models, you have to be careful in the interpretation of the estimated coefficients. Indeed, you cannot just look at them and say that when weight increases by 1 the probability of having a foreign car decreases by that amount. If you want to declare such a thing, you must compute the fitted probabilities for specific values of the regressors.
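That near-perfect R-squared between the two sets of fitted values has a simple explanation: the logistic CDF evaluated at a rescaled index tracks the normal CDF very closely. A quick numeric check (Python for illustration; 1.702 is the classic scaling constant from the approximation literature) makes the point:

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The logistic CDF at 1.702*z stays within about 0.01 of the standard normal
# CDF everywhere, which is why probit and logit fitted probabilities are
# nearly collinear (the R-squared trick in the post).
max_gap = max(abs(logistic_cdf(1.702 * z) - normal_cdf(z))
              for z in [i / 100.0 for i in range(-500, 501)])
assert max_gap < 0.01
```

The same rescaling is why logit coefficients come out roughly 1.6 to 1.8 times larger than probit coefficients on the same data, even though the fitted probabilities barely differ.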
Moreover, you are not considering interaction terms in the model that might diminish or increase the effect of the weight covariate. Thus, it is important to find a way to compute the predicted probabilities for different possible individuals and to compare how those change when the value of a regressor changes. These are important features you have to take care of when dealing with dichotomous variables and related models. In order to explore the interpretation of coefficients, I am going to use an online database to explain several differences among the margins, inteff and adjust commands. I am not going to discuss the mfx command because it was replaced by margins. If you still use this old command, please update your information by reading below. Indeed, several students and professionals of Stata are reluctant to use the margins command due to its syntax, but it is extremely useful to investigate marginal effects and adjusted predictions. Let’s open a survey dataset on health: we want to study how the probability of becoming diabetic depends on age, race and sex. I want to study whether the effect of age is different depending on sex, so I am going to create the interaction variable. Then, I estimate a logistic regression, at first without this interaction: logit diabetes black female age, nolog As we can observe, results show that getting older is bad for health but it seems to be unrelated to gender. The problem here is that we are not able to fully understand how bad it is to be old. That’s why we need to compute adjusted predictions, which specify values for each regressor in the model and then compute the probability of the event occurring for an individual with those values. For example, we want to check what the probability is that an “average” 35-year-old will have diabetes and compare it to the probability that an “average” 70-year-old will.
We need to type: adjust age=35 black female, pr adjust age=70 black female, pr As we can observe, a 35-year-old has a less than 2 percent chance of having diabetes, whereas a 70-year-old has an 11 percent chance. I used the expression “average” to indicate that I took mean values for the other independent variables in the model (female, black). However, this is an old command that is still in use in Stata, but it may easily be replaced by margins. I would have obtained the same result by using the margins command. The -margins- command calculates predicted probabilities that are extremely useful to understand the model and was introduced in Stata 11. margins looks at the discrete difference in probability between old and young for the different genders and races. Indeed, we can type: margins, at(age=(35 70)) atmeans vsquish If we want to investigate the predicted probability of having diabetes at each age, holding all the other covariates at their means, I could also have typed: margins age, atmeans There are several useful options of margins. The first one you can use is post, which allows the results to be used with post-estimation commands like test. Another one is predict, which allows you to obtain probabilities or linear predictions (i.e. predict(xb)). If we want to get the discrete difference in probability, we can use the dydx() option with the binary prediction. This option requests margins to report derivatives of the response with respect to the variable specified. eyex() reports derivatives as elasticities. dydx() finds the average partial effect of the explanatory variable on the probability of observing a 1 in the dependent variable. This is taking the partial effects estimated by the logit for each observation and then taking the average across all observations. This is extremely useful because the direct results of the logit estimation cannot be directly interpreted as partial effects without a transformation.
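The adjusted-prediction recipe itself is simple: plug chosen values into the linear index, hold the other covariates at their means, and run the result through the logistic CDF. A sketch in Python, with made-up coefficients standing in for the actual logit output from the survey data (all numbers below are assumptions for illustration):

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients and covariate means (NOT the real estimates):
b_const, b_black, b_female, b_age = -6.0, 0.4, 0.1, 0.06
mean_black, mean_female = 0.1, 0.5  # "average" individual: covariates at means

def adjusted_prediction(age):
    # Linear index at the chosen age, other covariates at their means.
    xb = b_const + b_black * mean_black + b_female * mean_female + b_age * age
    return logistic_cdf(xb)

p35, p70 = adjusted_prediction(35), adjusted_prediction(70)
assert p35 < p70  # getting older raises the predicted probability of diabetes
```

This is exactly what `adjust age=35 black female, pr` and `margins, at(age=(35 70)) atmeans` do under the hood: the difference between the two commands is only in how carefully they handle related regressors, as the next section shows.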
If you have interaction effects in your model, you will need to specify your regressors using a particular notation Stata recognizes in order to compute marginal effects: the i. prefix tells Stata a variable is categorical (i.e. a factor variable, creating indicator variables as already mentioned here); the c. prefix specifies continuous variables; # and ## specify interaction terms. Now we are going to study how margins deals with factor variables, but first, let’s explain why adjust fails to compute them. If we type: adjust age = 70 black female age2, pr We can immediately notice that adjust reports a much higher predicted probability for an average individual of 70 years, but it fails to control for the relationship between age and age2. Indeed, it uses the mean value of age2 in its calculations rather than the correct value of 70 squared. If we use margins with factor variables, the command recognizes that age and age2 are not independent of each other and calculates accordingly. In fact: logit diabetes i.black i.female age c.age#c.age, nolog margins, at(age=70) atmeans The smaller and correct estimate is now 10.3 percent. Try it and see. By doing this, Stata knows that if age=70 then age squared equals 4900, and it hence computes the predicted values correctly. What about interaction terms? If we want to study the interaction between age and female, we have to regress: logit diabetes black female age femage, nolog If we use adjust, we get wrong estimates because adjust does not recognize the relationship between female and femage, and thus it cannot understand that, if female=0, femage will also equal zero; it uses the average value instead. If you specify that your model has factor variables, margins recognizes that the different components of the interaction term are related and computes correctly: logit diabetes i.black i.female age i.female#c.age, nolog margins female, atmeans grand A partial or marginal effect measures the effect on the conditional mean of y of a change in one of the regressors.
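The age2 pitfall is pure arithmetic: the sample mean of a squared variable is neither the square of the mean age nor the square of the age you are predicting for. A tiny check (Python, with a made-up age list for illustration) shows why plugging in mean(age2) instead of 70 squared corrupts the prediction:

```python
# Illustrative ages; any sample makes the same point.
ages = [25, 40, 55, 70, 85]

mean_age = sum(ages) / len(ages)                  # mean of age
mean_age_sq = sum(a * a for a in ages) / len(ages)  # mean of age squared

# adjust plugs in mean_age_sq, but the correct prediction for a
# 70-year-old needs exactly 70**2 = 4900 in the age2 slot:
assert mean_age_sq != 70 ** 2
# And E[age^2] is not (E[age])^2 either, by Jensen's inequality:
assert mean_age ** 2 != mean_age_sq
```

Because margins with factor-variable notation knows age2 is c.age#c.age rather than a free-standing regressor, it substitutes 4900 when you ask for at(age=70), which is the whole difference between the wrong and the correct estimate.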
In OLS it equals the slope coefficient. This is no longer the case in nonlinear models. The marginal effect for a categorical variable shows how the probability of y=1 changes as the categorical variable changes from 0 to 1, after controlling for the other variables in the model. With a dichotomous independent variable like black, the ME is the difference in the adjusted predictions for the two groups. Let’s see how to compute them: logit diabetes i.black i.female age, nolog margins, dydx(black female) atmeans This result tells us that, if you have two otherwise-average individuals, one white and the other black, the black individual’s probability of having diabetes would be 2.9 percentage points higher. It can also be helpful to use graphs of predicted probabilities to understand and/or present the model. If we graph marginal effects for different ages in black people, we see that the effect of black differs greatly by age. How can we do that? Luckily, with the marginsplot command introduced in Stata 12. margins, dydx(black female) at(age=(20 30 40 50 60 70)) vsquish The noci option tells Stata not to display the confidence intervals of the estimates. Other options related to the graph editor are available at my previous post on graphs and can all be used on marginsplot. Let’s complicate our results a bit (otherwise, where is the fun?) by asking Stata to compute marginal effects for the interaction between race and gender. We must type: logit diabetes i.black i.female age i.female#c.age, nolog margins female#black, at(age=(20 30 40 50 60 70)) vsquish marginsplot, noci title(Adjusted Predictions of Interaction) There is another package to be installed in Stata that allows you to compute interaction effects, z-statistics and standard errors in nonlinear models like probit and logit models.
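For a continuous regressor in a logit model, the marginal effect has the closed form β·p·(1−p), so it changes with where you evaluate it, unlike the constant OLS slope. A sketch in Python (hypothetical coefficients, chosen only for illustration) verifies the formula against a numerical derivative:

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

beta0, beta1 = -2.0, 0.5   # hypothetical logit intercept and slope
x = 3.0                    # point at which we evaluate the effect

p = logistic_cdf(beta0 + beta1 * x)
analytic_me = beta1 * p * (1.0 - p)        # dP/dx for the logit model

# Central-difference numerical derivative as a cross-check.
h = 1e-6
numeric_me = (logistic_cdf(beta0 + beta1 * (x + h)) -
              logistic_cdf(beta0 + beta1 * (x - h))) / (2.0 * h)

assert abs(analytic_me - numeric_me) < 1e-6
# Unlike OLS, the effect depends on x through p, not just on beta1.
```

Averaging this quantity over every observation in the sample is exactly the "average partial effect" that `margins, dydx()` reports; evaluating it at the covariate means is what `atmeans` does instead.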
The command is designed to be run immediately after fitting a logit or probit model, and it is tricky because it has an order you must respect if you want it to work: inteff depvar indepvar1 indepvar2 interaction_var1var2. Indeed, it cannot contain fewer than four variables, and the interacted variables cannot have higher-order terms, such as squared terms. If the interaction term (in the fourth position) is a product of a continuous variable and a dummy variable, the first independent variable x1 has to be the continuous variable, and the second independent variable x2 has to be the dummy variable. The order of the second and third variables does not matter if both are continuous or both are dummy variables. Let’s make a practical example with two continuous variables interacted, using a dataset named lbw2 but known as Hosmer & Lemeshow. probit low age lwt age_lwt inteff low age lwt age_lwt, savedata(C:\User\Michela\Stata\logit_inteff,replace) savegraph1(C:\User\Michela\Stata\figure1, replace) savegraph2(C:\User\Michela\Stata\figure2, replace) As you can see from the selected options, you can choose to save both graphs produced and to save the output. The saved file includes four variables: the predicted probability, the interaction effect, the standard error and the z-statistic of the interaction effect. You can also decide to save two scatter graphs that both plot predicted probabilities on the x-axis. The first graph plots the interaction effects against predicted probabilities, whereas the second one plots z-statistics of the interaction effect against predicted probabilities. We may also wish to see measures of how well our model fits. This can be particularly useful when comparing competing models. The user-written command fitstat produces a variety of fit statistics as a post-estimation command but, if you want to use it, you need to install it first. In our example, this is what we get if we type the command after the probit regression.
Here you have all you need to evaluate your model, starting from the AIC and BIC criteria. Not sure what to do with them? I suggest you check a good econometrics book such as Wooldridge or Hamilton. I think it is time to stop here for now. I leave dynamic and multinomial logit and probit models for another time!
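As an appendix for readers outside Stata: here is how `margins, dydx(black) atmeans` gets its number for a dummy regressor, sketched in plain Python. All coefficients and sample means below are invented for the illustration; they are not the estimates from this post.

```python
# Hedged illustration, NOT the post's estimates: the marginal effect of a
# dummy at the means is the discrete change in predicted probability as
# the dummy goes 0 -> 1, holding the other regressors at their means.
import math

def logistic(u):
    # inverse logit: Pr(y = 1) for linear index u
    return 1.0 / (1.0 + math.exp(-u))

# hypothetical logit estimates for: diabetes on black, female, age
b_cons, b_black, b_female, b_age = -6.0, 0.4, 0.1, 0.06
mean_female, mean_age = 0.52, 47.5   # hypothetical sample means

def p(black):
    # adjusted prediction: set black, hold the other regressors at their means
    return logistic(b_cons + b_black * black
                    + b_female * mean_female + b_age * mean_age)

me_black = p(1) - p(0)   # discrete change, 0 -> 1
print(f"adjusted predictions: {p(0):.3f} (non-black) vs {p(1):.3f} (black)")
print(f"marginal effect of black at means: {me_black:.3f}")
```

The same difference-of-adjusted-predictions logic is what margins computes internally, together with delta-method standard errors that this sketch omits.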
<urn:uuid:47e96c6d-f351-4b06-8b26-243057914940>
CC-MAIN-2018-43
http://econometricstutorial.com/2015/03/logit-probit-binary-dependent-variable-model-stata/
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00081.warc.gz
en
0.914356
3,494
2.515625
3
2.825198
3
Strong reasoning
Science & Tech.
Olaf the Black (DNB00) OLAF (1177?–1238), called the Black, king of the Isles, was the son of Godred, king of the Isles, and of Fingola, granddaughter of Muircheartach (d. 1166), king of Ireland [see O'Lochlainn, Muir]. His parents had been united in religious marriage through the intervention of Cardinal Vivian, papal legate, in 1176 (Chron. Regum Manniæ et Insularum, ed. Munch, i. 76, Manx Soc.) Olaf's father died in 1187, and though he had bequeathed his dominions to his legitimate son Olaf, the latter, being a child, was set aside in favour of his half-brother Reginald. Some years later Reginald assigned to Olaf the miserable patrimony of the island of Lewis in the Hebrides, where he dwelt for some time. Growing discontented with his lot, he applied to Reginald for a larger share of his rightful inheritance. This was refused, and about 1208 Reginald handed Olaf over to the custody of William the Lion of Scotland, who kept him in prison until his own death in 1214. On the accession of Alexander II Olaf was released, and returned to Man, whence he shortly set out with a considerable following of men of rank for Spain, on a pilgrimage to the shrine of St. James at Compostella. On his return, Reginald, who was apparently reconciled to him, caused him to marry his own wife's sister, the daughter of a noble of Cantyre, and again assigned to him Lewis for his maintenance (ib. pp. 82-4). Olaf accepted the gift, and departed to Lewis. Soon after his arrival there, Reginald (?), bishop of the Isles, visited the churches, and canonically separated Olaf and his wife as being within the prohibited degrees of relationship, whereupon Olaf married Christina, daughter of Ferquhard, earl of Ross. Aroused to anger, Reginald's queen, the sister of Olaf's divorced wife, called upon her son Godred to avenge the wrong done to her house. The latter collected a force and sailed for Lewis, but Olaf escaped to his father-in-law, the Earl of Ross, abandoning Lewis to Godred. 
Olaf was shortly joined by Paul Balkason, the leading chieftain of Skye, who had refused to join in the attack on Lewis. Entering into alliance, the two chieftains in 1223 successfully carried out a night attack upon the little island of St. Colm, where Godred was. The latter was taken and blinded, it is said, without Olaf's consent (ib. pp. 86-8; cf. Ann. Regii Islandorum, ap. Langebek, Scriptt. Rer. Dan. iii. 84). Next summer Olaf, who had won over the chiefs of the isles, came to Man to claim once more a portion of his inheritance. Reginald was forced to agree to a compromise by which he retained Man, with the title of king, while Olaf was to have the isles—namely, the Sudreys. The peace was of short duration, for in 1225 Reginald, supported by Alan, lord of Galloway, attempted to win back the isles. The Manxmen, however, refused to fight against Olaf and the men of the isles, and the attempt failed. Shortly after Reginald, under pretext of a visit to his suzerain, Henry III of England, extorted one hundred marks from his subjects, wherewith he went to the court of Alan of Galloway and contracted a highly unpopular alliance between his daughter and Alan's son. The Manxmen rose in revolt, and called Olaf to the kingship. Thus, in 1226, the latter obtained his inheritance of Man and the Isles, and reigned in peace two years (ib. p. 90). That Olaf did, however, possess both the title of king and considerable influence before this date, would seem probable if two extant documents are rightly held to relate to him. The former of these shows him to have been at issue with the monks of Furness in Lancashire with regard to the election of their abbot, Nicholas of Meaux [q. v.], to the bishopric of the isles (Dugdale, Monasticon Anglicanum, viii. 1186). The second, dated 1217, is from Henry III of England to Olaf, king of Man, threatening vengeance should he do further injury to the abbey of Furness (Oliver, Monumenta de Insula Manniæ, ii. 42, Manx Soc.)
In 1228 an attempt was made at negotiation for the settlement of the differences between Olaf and Reginald. Letters of safe-conduct to England were granted by Henry III to Olaf for the purpose (Rymer, Fœdera, i. 303). The attempt, however, seems to have failed, for about 1229, while Olaf was absent in the isles, King Reginald took the opportunity to attack Man in alliance with Alan, lord of Galloway. Olaf, on his return, drove them out, but during the winter of the same year Reginald made another attempt. Olaf, who appears to have exercised great personal influence over his men, met and defeated him at Dingwall in Orkney. Here Reginald was slain on 14 Feb. 1230 (Annals of England, i. 148; cf. Chron. Manniæ, i. 92; Ann. Regii Islandorum, ap. Langebek, Scriptt. Rerum Danicarum, iii. 88). Soon after this event Olaf set out to the court of his suzerain, the king of Norway; for in spite of Reginald's formal surrender of the Kingdom to the pope and king of England in 1219, Olaf had remained faithful to Hakon V of Norway (Annals of England, i. 147; Flateyan MS. ap. Oliver, Monumenta, i. 43). Before Olaf's arrival in Norway, however, Hakon had appointed a noble of royal race named Ospac to the kingship of the Isles, and in his train Olaf and Godred Don, Reginald's son, were obliged to return. After varied adventures in the western islands of Scotland (ib. i. 43 seq.), Ospac was killed in Bute, and Olaf was chosen as the new leader of the expedition, which was next directed against Man. The Manxmen who had assembled to resist the Norwegians, again, it is said, refused to fight against Olaf, and he and Godred Don divided the kingdom between them. Shortly after Godred was slain in Lewis, and Olaf henceforth ruled alone. In 1235 Olaf appears to have been in England on a visit to Henry III, who granted him letters of safe-conduct and of security to his dominions during his absence (Rymer, Fœdera, i. 303). 
It was possibly during this visit that Henry committed to him the guardianship of the coasts both of England and Ireland towards the Isle of Man, for which service he was to receive one hundred marks yearly and certain quantities of corn and wine (ib. p. 341). In accepting this duty Olaf apparently renounced his allegiance to Hakon V of Norway, who at this time threatened the coasts, and who, in consequence of Olaf's defection, had to abandon his expedition. In 1236–7 Olaf appears, nevertheless, to have been in Norway on business to the king, and with the consent, moreover, of Henry III, who guaranteed the safety of his dominions during his absence (ib. pp. 363, 371). Shortly after his return he died on 21 May 1238 (Annals of England, i. 150; cf. Chron. Manniæ, i. 94). Olaf had several sons: Harold (d. 1249), who succeeded him; Godfrey (d. 1238); Reginald (d. 1249), king of Man; Magnus (d. 1265), king of Man from 1252; and Harold (d. 1256) (Langebek, Scriptt. Rer. Dan. ii. 212). [In addition to the authorities cited in the text, see Robertson's Early Kings of Scotland, ii. 98 seq.; Beck's Ann. Furnesienses, pp. 169, 187; Torfæus's Orcades, pp. 161–2; Hist. Rer. Norveg. iv. 195–6.]
<urn:uuid:da8845eb-8c23-45d8-bb15-87294cae7ff9>
CC-MAIN-2019-39
https://en.wikisource.org/wiki/Olaf_the_Black_(DNB00)
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573065.17/warc/CC-MAIN-20190917081137-20190917103137-00292.warc.gz
en
0.980692
1,846
2.6875
3
3.055406
3
Strong reasoning
History
FLAC stands for Free Lossless Audio Codec, an audio format much like MP3 but lossless, meaning that audio compressed in FLAC loses none of its quality. FLAC files are compressed by 30-50% of their original size, versus the roughly 80% compression used by the MP3 format, which makes FLAC good for archiving. Unlike lossy audio compression formats such as MP3, OGG and WMA, a decoded FLAC stream is bit-for-bit identical to the original uncompressed audio file. When burning a DVD with WAV files, be sure to use the UDF file system. If you are on a Mac this is simple: just drag the WAV files onto the DVD and click burn. Macs use the UDF format natively and will produce excellent results without any intermediary program. Uncompressed audio formats store the audio data exactly as recorded. This leads to large files, but no data is lost, so they are suitable for archiving original recordings. The most common uncompressed audio format is PCM, which is usually stored in a WAV or AIFF file. In the "Output Settings" area, select FLAC as the output format. You can even customise the metadata, such as title, artist, album, track and more. To choose ALAC as your output format instead, click "Format" and then go to the "Audio" menu. The software supports a variety of audio output formats. You can adjust the encode settings of the file, such as the channels or the bit rate, by clicking the "Settings" icon on the menu. The supported audio formats are displayed there, and you can simply click the "M4A" format as your choice. Free Lossless Audio Codec (FLAC) is an audio compression codec primarily authored by Josh Coalson and Ed Whitney.
As its name implies, FLAC employs a lossless data compression algorithm: a digital audio recording compressed with FLAC can be decompressed into an identical copy of the original audio data. Audio sources encoded to FLAC are typically reduced to 50-60% of their original size. The first step is to select the files to convert from ALAC to FLAC. Run the ALAC to FLAC converter and use the folder explorer to browse to the files you want to convert. Then select the file in the file list, drag it to the drop zone and drop it there. Optionally, the ALAC to FLAC converter lets you edit the audio tags of any selected file in the drop zone. I'm really trying to play .flac in iTunes to see if it will support multichannel rips from DTS or DVD-A. I know that you can play multichannel DTS rips converted to ALAC on your ATV4, but I really want this to work in iTunes, and I believe the problem is not just the format but a limitation of iTunes itself. Considering that it supports passthrough of other surround formats like PCM, it does not make sense to pay the licensing for use on Apple TV but completely neglect iTunes for nearly 20 years now. Similarly, it is absurd that Apple still won't support .flac, a free format, presumably because they want people using .alac, because that's so bloody necessary... FLAC is great because it is a fully taggable format that plays on all platforms except iTunes! Apple's deliberate "only game in town" schtick is really getting old. AnyMP4 Audio Converter is one of the easiest programs to use and offers you a wide selection of options to convert ALAC to FLAC and process both audio and video files. It can convert audio files quickly, with one of the fastest speeds recorded in our tests, allowing users to convert entire libraries with just a few clicks of the mouse.
Inserting "IF NOT EXIST d:%~pI%~nI.m4a" into the above command after DO will skip converting files that already exist in the destination directory. FLAC can handle resolutions from 16-bit at 44.1 kHz and 96 kHz, and 20-bit at 44.1 kHz (HDCD) and 96 kHz, all the way up to 24-bit at 192 kHz (SACD, DVD-Audio & Blu-ray); it is compatible with nearly every hi-end format other than iTunes. Shame on Apple. If you are not familiar with FLAC, you should get a clear view of this file format. FLAC, short for Free Lossless Audio Codec, is an audio coding format for lossless compression of digital audio, and is also the name of the reference codec implementation. Digital audio compressed by FLAC's algorithm can typically be reduced to 50-60% of its original size and decompressed to an identical copy of the original audio data. It is also supported by more hardware devices than competing lossless compressed formats, which can have intellectual-property constraints. A file conversion is simply a change of a file that was created in one program (an ALAC file) into a form intelligible to another program (i.e. the FLAC format). There are various websites offering "online" conversion of ALAC to FLAC files, without having to download a particular program to your computer. However, if you have not found a suitable ALAC file converter on the web, you can use our list of programs to handle the conversion of ALAC to FLAC files.
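The skip-if-exists idea from the batch command above can be sketched in Python as well. This is an illustration only: the ffmpeg invocation, directory names and .m4a/.flac extensions are assumptions, and ffmpeg must be on the PATH for a real run.

```python
# Hedged sketch: batch-convert ALAC (.m4a) files to FLAC by shelling out
# to ffmpeg, skipping files that already exist in the destination
# directory (the same idea as the IF NOT EXIST batch-file trick).
from pathlib import Path
import subprocess

def convert_dir(src: Path, dst: Path, run=subprocess.run):
    """Convert every .m4a in src to a .flac in dst; return new file names."""
    dst.mkdir(parents=True, exist_ok=True)
    converted = []
    for m4a in sorted(src.glob("*.m4a")):
        flac = dst / (m4a.stem + ".flac")
        if flac.exists():   # skip files already converted
            continue
        run(["ffmpeg", "-i", str(m4a), "-c:a", "flac", str(flac)], check=True)
        converted.append(flac.name)
    return converted
```

The `run` parameter defaults to `subprocess.run` but can be swapped out, which keeps the skip logic testable without ffmpeg installed.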
<urn:uuid:7f4c0194-497f-4308-ad03-8d330c70d3d7>
CC-MAIN-2022-05
https://immaster.ru/convert-music-files-from-flac-to-alac/
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301341.12/warc/CC-MAIN-20220119125003-20220119155003-00302.warc.gz
en
0.890441
1,209
2.890625
3
2.023031
2
Moderate reasoning
Software
The hoplitodromos or hoplitodromia was an ancient foot race, part of the Olympic Games and the other Panhellenic Games. It was the last foot race to be added to the Olympics, first appearing at the 65th Olympics in 520 BC, and was traditionally the last foot race to be held. Unlike the other races, which were generally run in the nude, the hoplitodromos required competitors to run wearing the helmet and greaves of the hoplite infantryman from which the race took its name. Runners also carried the aspis, the hoplites' bronze-covered wood shield, bringing the total encumbrance to at least 50 pounds. As the hoplitodromos was one of the shorter foot races, the heavy armor and shield made it less a test of endurance than one of sheer muscular strength. After 450 BC, the use of greaves was abandoned; however, the weight of the shield and helmet remained substantial. At Olympia and Athens, the hoplitodromos track, like that of the diaulos, was a single lap of the stadium. Since the track made a hairpin turn at the end of the stadium, there was a turning post called a kampter at each end of the track to assist the sprinters in negotiating the tight turn — a task complicated by the shield carried in the runner's off hand. At Nemea the distance was doubled to four stades, and at Plataea in Boeotia the race was 15 stades in total.
"hoplitodromos." Definitions.net. STANDS4 LLC, 2016. Web. 30 Jul 2016.
<urn:uuid:59774b45-b817-454f-a68b-b2dc3cafdce0>
CC-MAIN-2016-30
http://www.definitions.net/definition/hoplitodromos
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258918071.75/warc/CC-MAIN-20160723072838-00140-ip-10-185-27-174.ec2.internal.warc.gz
en
0.909588
535
3.1875
3
1.222834
1
Basic reasoning
Sports & Fitness
One factor used to classify typestyles is alphabet proportioning, which is based on the width and height relationships of the letterforms that make up a font. This differs from other characteristics that are used to categorize fonts, like stroke weight, angle, and/or ornamentation. There are three main types of alphabet proportioning: Old Style, Modern, and Fixed-Width. The letterforms of these three proportional classes vary in their ratio of width to height. For Old Style and Modern fonts the widths of the different letterforms within an alphabet often vary while their height does not. Also, different font styles exhibit a variety of widths, with some appearing condensed and narrow, while others appear extended and wide. Many typestyles have been created as a family of fonts to offer a selection of different letterform weights and proportions. The font family concept began in the 1700s; however, it was Morris Benton who is credited with popularizing it in the 19th century. The purpose of offering various type form versions is to provide flexibility when handling complex hierarchical arrangements. In addition to including a variety of different letterform weights, a large font family will include varieties of width such as ultra condensed, extra condensed, narrow, condensed, normal, Roman, wide, and extended. Old Style proportioning originated during the ancient Roman Empire and is evident on the architecture and monuments, such as the Trajan Column, that remain in cities such as Rome, Pompeii, and Carthage. The proportioning is based on a geometric construct known as the Golden Section. The Golden Section, which was derived from observing plants in nature, was used to determine the aesthetic appearance of each letterform that comprised the Roman alphabet. Letters such as B, E, F, J, L, P, and S have a strongly vertical 2:1 height-to-width ratio, while A, D, H, K, N, R, T, U, V, X, Y, and Z are closer to a 9:8, nearly square, ratio.
Round letters such as C, G, O, and Q are based on a 1:1 ratio circle. The width of letters M and W extends beyond the height ratio to 9:10. [Image: Old Style proportioning, serif.] Modern is a relative term, and when referring to typography Modern can mean anything from the 18th century onward. Type foundries first introduced what we now call Modern alphabets in the 1790s. These modern proportioned fonts differ from their Old Style predecessors in that the type manufacturers did away with the Golden Section inspired variety of letter widths and settled on one common width on which to base all the alphabet letters. Although based on one common width, Modern proportioned letterforms have subtle width variations to compensate for optical deceptions; the intention is for all the letters to look as though they are the same width. Letters such as the M and W have been condensed; and C, G, O, and Q, although round letters, are no longer round, to match the common rectangular proportion of the rest of the letters. [Image: Modern proportioning, serif.] [Image: Modern proportioning, sans serif.] Also referred to as Super-Shape, the most recent classification, Fixed-Width proportioned alphabets, was introduced in the early 1950s in Europe. Unlike Modern proportioned alphabets, Fixed-Width letterforms share a common structure, so they not only appear to be the same width, they physically are the same width. So not only do letters like C, G, O, and Q share a common structure, but other letters such as D, E, F, L, S, and U can also be based on an "O" structure. Some Fixed-Width fonts may be comprised exclusively of upper-case letters, while others contain both upper and lower-case. [Image: Fixed-Width proportioning, serif.] [Image: Fixed-Width proportioning, sans serif.]
<urn:uuid:5dbc4c41-2dcd-4301-a89b-561e51e27c59>
CC-MAIN-2024-10
https://www.theinformedillustrator.com/2013/10/typography-for-illustrators-6.html
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474893.90/warc/CC-MAIN-20240229234355-20240301024355-00566.warc.gz
en
0.940973
820
4.125
4
2.100307
2
Moderate reasoning
Art & Design
What is the Definition of Zero? Who Invented the Symbol? Date: 9 Jan 1995 12:18:50 -0500 From: David Chen Subject: Ask a question Hi, Dr. Math I am a student from Monta Vista's Internet class and I am here to ask you a question about math. I really wish that you can help me answer the questions. My question is: What is the definition of zero, and who invented or introduced the symbol to represent the zero? Thanks in advance. Sincerely, David Chen Date: 10 Jan 1995 03:48:57 -0500 From: Dr. Sydney Subject: Re: Ask a question Dear David, Hello! I'm glad you wrote to Dr. Math. The concept of zero is surprisingly deep, and it took human thinkers quite a long time to come up with the notion of zero. In fact, though mathematicians began thinking about the concept of zero in 2000-1800 B.C.E., it was not until about 200-300 B.C.E. that the Babylonians began using a symbol that would evolve into what we today know as zero. It turns out that mathematicians first thought of zero in the context of writing numbers down -- zero was first a placeholder. Before mathematicians understood the notion of zero, there was much ambiguity about written numbers. For instance, if the symbol for 5 was written down, there was no way to tell what number was being expressed -- was it 5? Or, 50? Or, 5,000,000? Thus, zero was introduced as a placeholder to avoid these ambiguities. In India, the concepts of 0 as a placeholder and 0 as a number were associated with one another much earlier than in Babylon. It is from the Indians that we get our present-day symbol for 0. I don't have an exact definition for 0 here with me at home. I can tell you this: when working with sets or groups of elements under some defined operation of addition, the "zero element" is defined as the element, let's call it z, such that a + z = a for all a in the set or group. So, one definition you could use for 0 is that 0 + x = x for all real numbers x. 
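A small aside the letter leaves implicit: a set or group can have at most one such zero element, by a one-line argument. If z and z' are both zero elements, then

```latex
z = z + z' = z'
```

where the first equality holds because z' is a zero element and the second because z is, so the two must coincide.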
Alternatively, you might define 0 as the number in between the positive and negative numbers. Or, maybe you could define 0 as lacking quantity (that's what the dictionary says!) What do you think about these different definitions? If you are looking for a different definition, write us back, and we'll try to find a better one. I hope this helps. Write back if you have any questions. --Sydney, Dr. "whoa" math Ask Dr. Math™ © 1994-2013 The Math Forum
<urn:uuid:ea0e4f87-efea-463b-9fb5-a8f26cde1f67>
CC-MAIN-2014-49
http://mathforum.org/library/drmath/view/59074.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007625.83/warc/CC-MAIN-20141125155647-00146-ip-10-235-23-156.ec2.internal.warc.gz
en
0.966758
575
3.359375
3
2.339078
2
Moderate reasoning
Science & Tech.
One of my favourite things to do during my time as a teacher was to set up schedules for my classroom, plan out lessons and units, and help students stay on track with their learning and with their assignments. As a young mom back then, I thought it would be a good idea to use the same kind of set up at home with my own kids around scheduled feeding, sleep time, and play time. As my own kids grew and my role as a teacher of teens continued, I realized more and more that kids of all kinds thrive from structure, routine and predictability. All of these things help our kids with their executive function. In school, teachers provide schedules, structures and routines to kids that, over time, become a way of life. The benefits of this kind of structured functioning became clear to me as my students and my own children entered the teenage years. In my roles as a mom and a teacher I was able to witness the advantages of good planning skills in teens firsthand, and the troubles that can arise for kids when organizational skills fall apart. These kinds of planning skills are known as executive function skills (meaning the skills you need to execute tasks). What most parents and teachers don’t realize is that the full scope of executive function doesn’t just include planning and organizing, but also includes: - Getting started - Following through on tasks - Goal-directed persistence - Performance monitoring - Emotional regulation With the latest research in neuropsychology, we’re discovering that it can take up to 25 years for executive skills to fully develop! In other words, executive skills are dependent on brain development over time. This development happens in the prefrontal cortex – the part of the brain just behind the forehead. Once I started to learn more about executive skill development in kids and teens, I became particularly concerned about kids who had challenges with executive skills. 
These are the kids who underachieve because of weak skills in organization and time management, which in turn prevents them from working to their potential or achieving their goals. In many cases these kids have had chronic problems throughout school and may have developed a negative history there. Sometimes these kids have been labelled as lazy, irresponsible and not caring about their own success and achievement. These children are largely misunderstood. For kids with attentional disorders and learning challenges, these skills develop even more slowly and are more sensitive to disruption. Stress and Executive Function Skills: Getting Through School Closure And Online Learning In The Time of A Pandemic At the time of our school closures when typical schedules and routines disappeared, and teacher support for project completion, time management and organizational skills was unavailable, many students with weak or immature executive skills floundered. In fact, many students of all abilities, including high achieving students, struggled without the day-in, day-out support that teachers typically provide through face to face connections and organizational supports in classrooms. Even more importantly, in times of stress (such as during the current pandemic), everyone’s executive skills are taxed. From a survival point of view, right now is the time when our brains are hard-wired to focus on the immediate needs in our environment and whatever is causing our stress. This in turn decreases the resources that usually get directed to executive skills, leading to reductions in working memory, emotional regulation, sustained attention and goal-related persistence – just to name a few! When Kids Are Stretched And Stressed During the pandemic, many parents are struggling to contain their own worries about jobs, lost income and health conditions related to the COVID-19 virus. When kids begin to understand what their parents are worrying about, they start to worry too. 
To add to the strain, the familiarity and routine of school as well as the many supports at school that provide security to students have disappeared. This support often includes nutrition breaks, feelings of love and belonging, and connections with teachers and peers who care for them. Finally, increased expectations that kids manage their school work on their own when daily routines disappeared tended to overload many students and contributed to a significant amount of stress and difficulty completing work. This stress can result in reduced mental resources that are normally devoted to executive function, causing significant difficulties for kids in coping emotionally and keeping up with learning at home. How Can I Help As An Executive Skills Coach? Moving forward, as we all wait to hear from our Education Minister regarding school opening plans, we can be thinking about how to best support kids in this upcoming school year, no matter what it brings. The best approach (at any time, but especially at a time like this) is to view executive functioning difficulties as obstacles, rather than character flaws or poor choices. If we approach kids using problem-solving strategies that include a sympathetic ear, trauma-informed practice (relationships matter!) and some open-ended questions and discussions, kids are more likely to work with us, do better and feel better. Many parents regularly use coaching as an option when teens push back against attempts to teach new skills to help them manage the details of life. Coaching is a process that keeps the pressure and the meltdowns away from parents, preserves family relationships at a time when they matter most, and helps kids develop the skills they need to adapt to new realities with resilience. Through coaching, kids can become the independent, self-sufficient individuals they want to be (and that their parents want to see), even during a pandemic.
As a coach, I work with kids to support their emotional health and well-being, help them identify their goals, and make daily plans to achieve them. This might include keeping up with assignments, advocating for accommodations at school, improving grades or even getting a job. I work hard to help kids feel autonomous and make important decisions about the goals that they want to work towards. At a time like this, our kids need a helping hand to navigate their way through very unsettling times, all the while keeping their eye on the prize – there is a way through this! As a consultant, I offer advice and strategies to kids, leaving the final decisions in their hands! In this way, a pre-teen or teen’s success building small goals will build a base for achieving bigger goals over time. I firmly believe that with help, kids can overcome the hardships that have suddenly landed on them and feel proud of themselves for prevailing. My role in the life of your child and your family in my practice at Alongside You is to offer support to help kids build executive function skills and feel successful, help your kids survive the pandemic and the continued upcoming changes in school life, and to help all of you stay connected and learn to rise above the current schooling challenges due to the pandemic. If you would like to meet with me for a consultation regarding your child’s progress, please contact us and we will be in touch with you soon. Secure video appointments are a safe, kid-friendly space to meet virtually and shake off the anxiety, despair and overwhelm and gain some ground as we approach our new normal at school. Reach out for help, relieve worry and remember that a helping hand is what is most needed for kids at this time in order to feel better, learn better and do better. I look forward to working with you and your kids! ADHD is one of the most prevalent psychiatric issues in our society.
According to current Canadian statistics, a conservative estimate is that 4% of adults and 5% of children experience ADHD worldwide. It is also one of the most treatable conditions, and often medications can be very helpful. ADHD is a neurodevelopmental disorder that primarily affects the frontal lobe of the brain and impacts executive functioning. What this means is that people suffering from ADHD often experience problems with attention, hyperactivity, decision making, mood regulation, and more. We see it in children very frequently here at Alongside You. The challenge is that it’s often misdiagnosed, or mis-attributed. Kids with ADHD are often labeled the “bad kids,” or it is assumed that they’re just behaving badly, for no apparent reason. While I can understand this, we have to ask ourselves, “if we suffered from some or all of the symptoms above, how would we manage this in our lives?” The answer, I’m confident, would be a resounding, “not well.” As I’ve already mentioned, ADHD is quite treatable most of the time, and most often it involves medications. What if the medications don’t work, or don’t work as well as it was hoped? What if the side-effects outweigh the benefits? What if you just don’t want to use medication? This is where neurofeedback training can help. While medications can be a very helpful treatment, there can be problems, or there can be no effect. Neurofeedback training can be of help with ADHD in a few specific ways. Here are a few ways it can be beneficial. Improving Executive Function Executive function is a primary mechanism of our brains. It helps us with many things, including decision making, organizing, impulse control, and many others. ADHD can make these functions very difficult. Neurofeedback can help in two primary ways. First, the training can help the brain optimize its inherent abilities.
The training can help regain function in the frontal lobes and can also optimize the function that is already there by strengthening existing neural connections and creating new ones. Second, neurofeedback training can help the limbic system calm down. Here’s why that’s important. The limbic system controls our fight-or-flight response. There is mounting evidence that limbic activity, particularly an overactive limbic system, is involved in particular forms of ADHD, and also in aspects of every form of ADHD. When our limbic system activates, its job is to keep us safe. Here’s the problem: it can’t tell the difference between anxiety, fear, and stress. Think of the kids you know with ADHD and how often you see these three things in their presence. When the limbic system activates and becomes highly engaged, it shuts off the frontal lobe. Lights out. What this means is no more executive functioning. Therefore, it stands to reason that if we can reduce the activity of the limbic system, it will help preserve executive functioning. Neurofeedback training can help the limbic system relax through training that area of the brain, and also through interacting with the central nervous system (CNS) and reducing activation. Mood regulation, or the lack thereof, is often part of the presentation of ADHD. Our brains and our bodies are integral to our emotion regulation and management. Through training the brain and the CNS, neurofeedback can help optimize the emotion centres of the brain and relax the CNS. If our emotion centres are running optimally and our CNS is less stressed, our emotions stay more consistent and manageable. Many individuals with ADHD have difficulty sleeping. One of the advantages of ADHD is that many folks with ADHD are very creative. The downside of this is that thoughts are many, and can run rampant. Bedtime is one of the quietest parts of our day and nothing is there to stop our thoughts from running free!
Neurofeedback can often help regulate our sleep patterns through brain training, CNS activity regulation, and reduction of stress and anxiety. If we do these things, and sleep improves, our overall stress level goes down, the brain runs more optimally, and our emotions stay more in control. The brain is an amazing organ in our bodies, and central to all of our functioning. ADHD impacts the brain in many strange and wonderful ways. While treatment for ADHD should always be multimodal, neurofeedback training can be a very valuable tool for kids and for adults struggling with this condition. If you’re interested in trying it, please contact us or give us a call. If you have any further questions, we’d be happy to answer them! One of the most exciting uses for neurofeedback therapy is in children struggling with Attention Deficit Hyperactivity Disorder (ADHD). ADHD can be one of the more difficult issues to treat and it causes a great deal of distress to many children, their parents, and school staff. We use neurofeedback (also called EEG Biofeedback) here at the clinic to help these kids reduce their symptoms and improve their functioning. You might wonder, why would we use neurofeedback for ADHD? There are a few reasons we like it and they correspond to the results we see with the children we work with. I hope it helps explain the usefulness of neurofeedback for ADHD in children. We often don’t notice the effects of poor sleep when we are children, but as a parent, I can definitely notice when my kids don’t sleep well. Further, now as an adult, I’ve become keenly aware of how lack of sleep affects my functioning. Neurofeedback can help the brain recalibrate and improve its function so that sleep improves, which in turn, improves attention, focus, and motivation – some of the core areas affected by ADHD. 
Improved Attention and Focus I have a number of clients with ADHD, and they know that my brain sometimes does the same things as theirs, so if there’s a loss of focus in session, invariably one of us will turn our head and exclaim, “Ooh, squirrel!” This usually leads to a great deal of laughter and a refocusing in our session. Attention and focus are hallmark symptoms of ADHD, and neurofeedback can help with this by training the brain to function more optimally. Contrary to popular belief, children with ADHD aren’t overstimulated; they’re chronically under-stimulated. Because of this, their brain will find ways to stimulate itself, which usually means hyperactivity or fidgeting. Neurofeedback can help recalibrate and rewire the brain on this level and reduce the need for stimulation, improving these symptoms. Neurofeedback Targets Brains At The Biological Level Without Medication One of the most common interventions for ADHD is medication. Now, just to be clear, I am not anti-medication at all. It is a very useful tool and has its place in treatment. Medications, however, don’t always work, sometimes they have side effects that are worse than the condition being treated, and sometimes clients don’t want to be on medications. Neurofeedback is another way of getting at the brain biology and rewiring it to improve functioning. It can also augment the effects of medication if the medication is not working as well as it could; sometimes neurofeedback can potentiate medications and lead to less medication being needed, or the ability to stop the medication altogether. Finally, if a client and/or family does not wish to use medications, neurofeedback can do many of the same things medication can in helping the brain function better. Neurofeedback Is Easy Every parent knows that getting children to participate in treatment can be difficult, especially a child with problems with focus, attention, and impulse control.
This is one of the benefits of neurofeedback therapy – if a child can sit in a chair and look at a screen and listen to an audio, they can do neurofeedback. We can even show movies through our equipment to keep them engaged when necessary. We can also pair the neurofeedback with creating art, reading a book, or other activities to keep the child engaged. Neurofeedback is flexible, straightforward, and easy for clients to participate in. We can adapt the environment and treatment to fit client needs and comfort. We can also tailor the treatment frequency to suit client availability and financial resources. Neurofeedback Is Accessible We know our clients lead busy lives, particularly when it comes to children and their activities. This is why we use equipment that we can send home with clients on a monthly rental basis. This has a number of advantages: accessibility, efficiency, and affordability. By doing home rentals, you can do neurofeedback in the comfort of your own home, on your own schedule. You can do training sessions as often as you like, which can help speed up the process and the results. It also makes things more affordable – for one monthly fee you can do as many sessions as you like, and you can even train the whole family for the same price! Are You Curious About Neurofeedback? I hope so! If you have any further questions, please give us a call and we’ll be happy to answer them. We can provide neurofeedback in our clinic, or we can send a rental unit home with you if it seems to be the best solution for you and your family. We love using neurofeedback to help children with ADHD, because we know it works, and we know kids love it. We love it because we see the results and the changed lives!
5 Letter Words Beginning With No – A word finder is a tool that lets you search for five-letter words. You can search for words containing specific letters or use the unknown-position field. Another option is word lists, which are also useful for finding words with repeated letters. These methods work best for five-letter words, and they are most effective when you have some idea of the word you want. Unknown position field A word finder can be used to locate words with five letters. The online word finder is a free application that lets users search for words simply by typing a letter into its position field. It can also be used to solve puzzles and search for words. Searching for words that include specific letters There are many ways to search for words with specific letters. The easiest is to use a word maker. If you select O as the last letter and BE as the first letters, the word maker will generate all matching words that include O. Each word can contain up to three letters before the chosen letter and two after it, so the longest result will consist of six letters; the tool can produce words such as ABODE. Another option is an online word finder. Word finders have a separate box for each letter. To search for words that begin with a given letter, enter it in the first box, type the second letter into the second box, and repeat the process for the remaining letters. A dictionary search tool is another way to find words that contain specific letters. It lets you type a word into the search box and use the search function to discover its meaning. The tool is helpful for solving crosswords, arrowwords, and rhymes, and puzzle solvers like it because it helps them discover words with certain letters.
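The position-based search described above can be sketched in a few lines of Python. The word list and the dot-pattern syntax here are hypothetical stand-ins for whatever dictionary and input boxes a real word-finder tool uses.

```python
import re

# A tiny sample lexicon; a real word finder would load a full dictionary file.
WORDS = ["abode", "noble", "nomad", "notch", "bribe", "probe", "nudge"]

def find_words(pattern, words=WORDS):
    """Match five-letter words against a pattern where '.' stands for an
    unknown letter, e.g. 'no...' = begins with NO, '....e' = ends in E."""
    rx = re.compile("^" + pattern + "$")
    return [w for w in words if len(w) == 5 and rx.match(w)]

print(find_words("no..."))   # five-letter words beginning with NO
print(find_words("....e"))   # five-letter words ending in E
```

Each `.` in the pattern plays the role of one unknown-position box in the online tool, while a fixed letter plays the role of a filled-in box.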
Using word lists Word lists of five-letter words are extremely useful for a variety of reasons. For instance, they can help you identify words with many vowels, and they can help you imagine various word combinations. Word lists don’t need to be alphabetical. Making word lists of five-letter words is a good way to teach children new words. To help children master new words, they can play word games or work on word puzzles. Kids will be delighted to discover new words and increase their vocabulary. For parents, five-letter word play is an exciting way to help kids learn English. Wordle, a game played with word lists, is very well known. Wordle asks you to guess a five-letter word within six attempts. Sometimes you will run out of ideas or clues to help you determine the word; if that happens, looking up a list of words can help. Recognizing repeated letters It is easy to spot repeated letters by looking through five-letter words; a word list will lead you to any word that has them, and such letters are typically linked in pairs. The letter S is easy to spot, so it is common to find a double S at the start or end of a five-letter word. Like the letter T, the letter S can appear anywhere in a five-letter word. Wordle provides a list of acceptable words, which contains the word of the day as well as clues to the vowels in words. You can narrow your list by searching for words that contain vowels or repeated letters, and beware of words that have unusual spellings or frequently repeated vowels. One method for finding repeated letters in five-letter words is to use a simple scoring system: divide the number of letters per word by their average frequencies to get an estimate of how often a word’s letters repeat.
In other words, the score will be higher if a letter occurs more frequently across 5-letter words.
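The frequency-based scoring idea can be made concrete with a short Python sketch. The five-word list below is a made-up example, and counting each distinct letter only once is one common heuristic for penalizing repeated letters, not the only possible scoring rule.

```python
from collections import Counter

# A made-up mini word list; a real scorer would use a full five-letter lexicon.
WORDS = ["crane", "slate", "geese", "nanny", "noble"]

# How often each letter appears across the whole list.
freq = Counter("".join(WORDS))

def score(word):
    # Sum the frequencies of the *distinct* letters only, so a repeated
    # letter adds no extra value and repeat-heavy words rank lower.
    return sum(freq[c] for c in set(word))

# Rank the words from most to least common letter coverage.
ranked = sorted(WORDS, key=score, reverse=True)
print(ranked[0])
```

Words built from frequent, non-repeating letters such as "crane" rise to the top, while repeat-heavy words such as "nanny" and "geese" sink, which is exactly the behavior the scoring system above is after.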
Why Yoga is Necessary to Live a Healthy Life Physical fitness, strong muscles, a disease-free life, a flexible body, etc. - these are some of the terms we use to define good health. External appearance is what this world appreciates. While physical health is an important aspect of one’s life, people have to understand that it alone cannot be enough to live happily. One needs to enjoy mental fitness, spiritual well-being, and social happiness in order to be completely fit and cheerful. In the era of globalization and scientific development, people are moving away from nature, which has a negative impact on health and mind. Since Yoga focuses on the natural way of living, its relevance has become even more intensified. Yoga covers all aspects of human wellness Yoga is an ancient science that took birth more than 5000 years ago in the sacred land of India. The science believes in the theory of complete fitness. Physical, mental, and emotional - all features of health are covered under Yoga. There is probably no single practice or philosophy in the whole world that sponsors complete health and makes people happy in the same way. - Yoga Asanas: Asanas are considered the best way to make the body strong in an entirely natural manner. Since the Vedic era, yogis of India have been practicing the beauty of Yoga asanas to remain physically fit. Whether you want a disease-free or muscular body, Yoga helps you. To name a few poses, Malasana, Bakasana, Utkatasana, etc. are extremely effective for building strong and happy muscles. There are a number of diseases that can be cured by Yoga asanas. Yoga experts advocate the practice of Bhujangasana, Setu Bandhasana, Baddha Konasana, etc. to get rid of asthma. Back pain can be relieved by the practice of Spinal Twist and Adho Mukha Svanasana. If you are unable to sleep well, Padangusthasana and Uttana Shishosana are two of the finest asanas to practice.
Numerous people around the world experience a better body structure with daily practice of Yoga poses. While many people think Yoga asanas are useful only for the body, they are also an amazing contributor to one’s mental energy level. - Meditation and Pranayama: Mental health is as important for the overall well-being of the body as physical health. Yoga believes that a muscular body is of little use when the mind is full of stress and anxiety. Especially in the modern world, when there is so much chaos in all walks of life, mental peace is needed the most. Meditation is considered the best exercise for bringing peace to the mind. It is a state of complete silence that takes practitioners into a domain where there is nobody but themselves and the soul. Meditation is all about focus, one of the most important things needed to enjoy success in life. Yoga also teaches us the power and substance of breathing. The practice of Pranayama is all about experiencing the beauty of breathing and letting pranic energy sail through the body in an astonishing fashion. Those who practice this breath-based exercise every day notice a new kind of vibrancy flowing in their blood, which is purified by the practice. - Spiritual Yoga: Amid all the popularity of Yoga as a physical exercise, the truth is that it is a spiritual science. The original configuration of Yoga exists in the form of a peaceful practice of love, spirituality, and devotion. Yoga defines health as the purest state of the soul, in which it is far away from all kinds of negativity. One can only attain this state in the shadow of Shiva - the ultimate yogi. Devotion to God is the only way to enjoy complete freedom from negative energy. Yoga teaches you to love nature by worshipping rivers, the land, the sun, the moon, etc. One is exposed to such beautiful things only when he or she embraces Yoga.
Although just three Yoga types are discussed here, it is imperative to know that Karma Yoga has equally influenced people to walk the path of inner happiness and wellness. Detoxification techniques of Yoga Yoga’s magic is unlimited. It has spread its wings into the domains of detoxification and weight loss. According to Yoga, toxins (ama) are the major enemy of human health. Toxins accumulate in the body and disturb its internal physiology. The detoxification theory of Yoga targets the extra, unwanted toxins in order to remove them from the body. One of the primary ambitions of today’s generation is to have a flat belly, which Yoga can help with as well. Phalakasana, Chaturanga Dandasana, Pranayama, and many more yogic practices are effective in detoxifying the body and losing fat. The most prominent reason why Yoga is necessary is that people need it. The ancient science of Yoga is a way of life rather than just a practice. Embrace the delight of Yoga in Nepal, India, Thailand, Bali, or anywhere in the world to live and understand a wholesome life.
The technology may well result in highly developed artificial intelligence that can instantly comprehend what it sees, with uses in robotics and self-driving vehicles. Researchers at the University of Central Florida (UCF) have developed a device for artificial intelligence that replicates the retina of the eye. The research may lead to cutting-edge AI that can recognize what it sees right away, such as automated descriptions of photographs captured with a camera or a phone. The technology could also be used in robots and self-driving cars. The device, which is described in a recent study published in the journal ACS Nano, also performs better than the eye in terms of the range of wavelengths it can perceive, from ultraviolet to visible light and on to the infrared spectrum. Its ability to combine three different operations into one further contributes to its uniqueness. Currently available smart image technology, such as that found in self-driving cars, requires separate data processing, memorization, and sensing. The researchers claim that by integrating the three processes, the UCF-developed device is much faster than existing technology. With hundreds of the devices fitting on a one-inch-wide chip, the technology is also quite compact. “It will change the way artificial intelligence is realized today,” says study principal investigator Tania Roy, an assistant professor in UCF’s Department of Materials Science and Engineering and NanoScience Technology Center. “Today, everything is discrete components running on conventional hardware. And here, we have the ability to do in-sensor computing using a single device on one small platform.” The technology expands on previous work by the research team that produced brain-like devices that can enable AI to work in remote regions and in space.
“We had devices that behaved like the synapses of the human brain, but still, we were not feeding them the image directly,” Roy says. “Now, by adding image-sensing capability to them, we have synapse-like devices that act like ‘smart pixels’ in a camera by sensing, processing, and recognizing images at the same time.” For self-driving vehicles, the versatility of the device will allow for safer driving in a variety of conditions, including at night, says Molla Manjurul Islam ’17MS, the study’s lead author and a doctoral student in UCF’s Department of Physics. “If you are in your autonomous vehicle at night and the imaging system of the car operates only at a particular wavelength, say the visible wavelength, it will not see what is in front of it,” Islam says. “But in our case, with our device, it can actually see in the whole range.” “There is no reported device like this, which can operate simultaneously in the ultraviolet range and visible wavelength as well as infrared wavelength, so this is the most unique selling point for this device,” he says. Key to the technology is the engineering of nanoscale surfaces made of molybdenum disulfide and platinum ditelluride that allow for multi-wavelength sensing and memory. This work was performed in close collaboration with Yeonwoong Jung, an assistant professor with joint appointments in UCF’s NanoScience Technology Center and Department of Materials Science and Engineering, part of UCF’s College of Engineering and Computer Science. The researchers examined the device’s
Reference: “Multiwavelength Optoelectronic Synapse with 2D Materials for Mixed-Color Pattern Recognition” by Molla Manjurul Islam, Adithi Krishnaprasad, Durjoy Dev, Ricardo Martinez-Martinez, Victor Okonkwo, Benjamin Wu, Sang Sub Han, Tae-Sung Bae, Hee-Suk Chung, Jimmy Touma, Yeonwoong Jung and Tania Roy, 25 May 2022, ACS Nano. The work was funded by the U.S.
Air Force Research Laboratory through the Air Force Office of Scientific Research, and the U.S. National Science Foundation through its CAREER program.
GORE, THOMAS PRYOR (1870–1949). Born on December 10, 1870, near Embry, Mississippi, to Thomas M. and Caroline Elizabeth Wingo Gore, U.S. Sen. Thomas P. Gore lost the sight of both eyes in two separate accidents as a young boy. Even as a teenager he was an outstanding orator, and he became active in politics before he could vote. He graduated from the normal school at Walthall, Mississippi, in 1890 and taught school before attending law school at Cumberland University in Tennessee, graduating in 1892 and being admitted to the Tennessee bar. In 1900 he married Nina Kay, who was thereafter to function as his "eyes." After joining the national Populist movement, Gore moved to Corsicana, Texas, in 1894 to practice law. He returned to Mississippi in 1895 and was an unsuccessful Populist candidate for Congress in 1898. With the defeat of William Jennings Bryan as the Populist presidential nominee in 1896, the party began to decline, and soon Gore became a Democrat. In 1901 Gore moved to Lawton, Oklahoma, where he continued to practice law. In 1903 he was elected to the Oklahoma Territorial Council. A renowned orator even from youth, he frequently offered elegant and quotable comments such as, "I would rather be a humble private in the ranks of those who struggle for justice and equality than to be a minion of plutocracy, though adorned with purple and gold." He served in the council until 1905. When Oklahoma was admitted as the forty-sixth state in 1907, he was elected one of Oklahoma's first two U.S. senators. His actions bespeaking those of a Progressive Democrat, he was reelected in 1908. As a former Populist, he easily represented the interests of farmers, who opposed railroad monopolies and high rates and demanded railroad regulation. Using his oratorical skill, he advanced the cause of Democratic presidential nominee Woodrow Wilson in 1912.
Becoming Wilson's trusted personal ally, he served on the Democratic National Committee from 1912 to 1916, assisting the president in a sweeping reorganization of the party. Gore turned down offers for a presidential cabinet position to keep his U.S. Senate seat from Oklahoma, a post to which he was reelected in 1914. The 1914 Oklahoma senatorial campaign was one of the dirtiest in American history. Gore's enemies set a trap for the blind senator. A woman named Minnie Bond lured him to her hotel room under the guise of speaking with him about an appointment for her husband. After a staged rendezvous, the woman claimed he had taken advantage of her. Prosecutors refused to file criminal charges, but Bond lodged a fifty-thousand-dollar civil lawsuit against Gore, who was defended by legendary Oklahoma lawyer Moman Pruiett. Pruiett labeled the proceeding, which captured headlines across the nation, a "political trial." The jury unanimously voted to exonerate the senator. Gore often voted in support of President Wilson's New Freedom legislation, including the establishment of the Federal Reserve System, the Federal Trade Commission, and woman suffrage. In the spring of 1913 Gore was appointed to his most cherished position in the Senate, chair of the Committee on Agriculture and Forestry. He also served on the Senate Finance Committee, the Committee on Railroads, the Committee on Expenditures in the Department of Justice, and the U.S. Commission to Investigate and Study Rural Credits and Agricultural Cooperative Organizations in European Countries. The study resulted in legislation that established a system of privately controlled land banks to operate under federal charter. Gore was an isolationist. He opposed American involvement in World War I, primarily because he believed that tax money should be spent only on agricultural programs, rather than armies and munitions. 
He became anti-administration on most war legislation and evoked the ire of many Oklahoma newspapers and voters. One newspaper called him "the Kaiser's right hand in America." Another paper demanded that he represent the pro-war stance of a majority of Oklahomans or resign. His antiwar stance also cost him his close personal friendship with President Wilson. Gore was unrelenting in his opposition to the war, opposing a draft to fill the ranks of the army and voting against the selective service act. Once the United States entered the war, however, he fully supported the effort. Nevertheless, even after the successful conclusion of the war, he did not change his stubborn isolationism. He believed America had been duped into entering the conflict, and he often quoted George Washington's advice to avoid entangling alliances. The war over, Gore's previous isolationist stands remained in the minds of the electorate. He was defeated in his reelection bid in 1920, largely because of his antiwar sentiments and his opposition to the popular President Wilson's plan for the League of Nations. After his defeat Gore practiced law in Washington, D.C. Running again for the Senate in 1930, he tied his campaign to the "cheese and crackers" campaign of Gov. William H. "Alfalfa Bill" Murray and was reelected. Back in the Senate, Gore criticized Republican Pres. Herbert Hoover's recovery policies during the Great Depression. He campaigned for successful Democratic presidential nominee Franklin D. Roosevelt in 1932 but soon became an outspoken opponent of the new president's recovery programs as well. The fiscally conservative senator vehemently opposed federal control of industry and relief efforts, believing instead that state and local leaders should guide the efforts to recover from the depression. He voted against the National Industrial Recovery Act, abstained from voting on the Agricultural Adjustment Act, and vocally opposed public welfare legislation. 
Most leaders wanted New Deal programs for Oklahoma and its cities and towns. When Gore threatened to vote against the Roosevelt legislation, he was again attacked in newspaper editorials and was barraged by letters from Oklahomans hurt by the economic downturn. Citizens booed him at political campaign rallies. Accused of being too conservative to support Roosevelt's relief efforts, Gore lost in his reelection campaign in 1936. Some thought he was out of step with the times, and he was removed from the political scene. He practiced law in Washington, D.C., until his death on March 16, 1949. His grandson is historian and author Gore Vidal. Thomas P. Gore's most important contributions were his staunch support of the oil industry, soil conservation, and American Indian tribal issues. But he may be best remembered for his glowing tribute of his adopted state. He said, "I love Oklahoma. I love every blade of her grass. I love every grain of her sands. I am proud of her past and I am confident of her future. The virtues that made us great in the past can keep us great in the future. We must march, and not merely mark time." Monroe L. Billington, "Senator Thomas P. Gore," The Chronicles of Oklahoma 35 (Fall 1957). Monroe L. Billington, Thomas P. Gore: Blind Senator from Oklahoma (Lawrence: University of Kansas Press, 1967). Robert Henry and Bob Burke, Thomas P. Gore: A Clearer Vision (Edmond: University of Central Oklahoma Press, 2002). Paul D. Travis, "Gore, Bristow, and Taft: Reflections on Canadian Reciprocity, 1911," The Chronicles of Oklahoma 53 (Summer 1975). The following (as per The Chicago Manual of Style, 17th edition) is the preferred citation for articles: Bob Burke, "Gore, Thomas Pryor," The Encyclopedia of Oklahoma History and Culture, https://www.okhistory.org/publications/enc/entry.php?entry=GO013. © Oklahoma Historical Society.
During the early days of the current pandemic, false information regarding the virus and its treatment became very common, which urged health agencies to address the issue and warn against myths and rumors about the infection. Despite the continuous emphasis on the dangers of bogus methods of prevention and cure, many people continue to follow some of them, including making DIY coronavirus vaccines at home. At the moment, multiple clinical trials around the world are testing different formulas for the development of an effective vaccine against the novel coronavirus. To date, no vaccine has been launched on the market or made available to the public. Last month, only Russia announced the launch of its first vaccine for the prevention of coronavirus. Though reports on the vaccine show that it was safe and produced no side effects during the first two phases of its clinical trial, the vast majority of the medical community believes otherwise and deems the vaccine unreliable without the mandatory phase three. Moreover, even though the vaccine has been approved, it is not available on the market and is only being used in a selected group of people in Russia. Overall, health experts are not sure when an effective vaccine will be developed and used in the general public to end the pandemic, but most agree that it can take several months or even another year for this to happen in nearly all of the countries affected by the health crisis. This means that all countries will have to continue implementing restrictions during the coronavirus pandemic. Some may even have to enforce another total lockdown in order to control the growing number of cases, but experts state that such measures will be necessary even with a vaccine on the market. Preventive measures and the distribution of vaccines will have to be combined to control the pandemic effectively. Without either of them, the crisis is unlikely to end.
So, people have to be patient, continue to follow guidelines, and avoid any fast cures or bogus methods of protection against the virus. However, there are still many people who have adopted bizarre measures to avoid the virus. Some of these methods do nothing at all, while others can be extremely dangerous to health and even cause death, including injecting DIY coronavirus vaccines. The practice of using homemade vaccines has become so prevalent that a new paper, whose findings appear in the journal Science, has accentuated the need for awareness regarding the potentially deadly effects of using a vaccine made at home. In addition, taking untested vaccines developed by random vaccine-developing groups or individuals is equally dangerous. The practice can lead to legal consequences and new issues that may damage public health further. Even though the vast majority of people, and certainly the medical community, want the pandemic to end as soon as possible, going for fast treatments and “quick fixes” can have the opposite impact and harm the progress made against it. Therefore, taking untested vaccines and other similar practices should be avoided.
Richard Strauss (1864 - 1949)

Richard Strauss enjoyed early success as both conductor and composer, in the second capacity influenced by the work of Wagner. He developed the symphonic poem (or tone poem) to an unrivalled level of expressiveness and after 1900 achieved great success with a series of impressive operas, at first on a grand scale but later tending to a more Classical restraint. His relationship with the National Socialist government in Germany was at times ambiguous, a fact that protected him but led to post-war difficulties and self-imposed exile in Switzerland, from which he returned home to Bavaria only in the year of his death, 1949.

Richard Strauss created an immediate sensation with his opera Salome, based on the play of that name by Oscar Wilde. Collaboration with Hugo von Hofmannsthal followed, resulting in the operas Elektra and the even more effective Der Rosenkavalier in 1911, followed by Ariadne auf Naxos. Der Rosenkavalier (‘The Knight of the Rose’) remains the best known of the operas of Richard Strauss, familiar from its famous concert waltz sequence. From Salome comes the orchestral ‘Dance of the Seven Veils’, which occurs at an important moment in the drama. The late opera Die Liebe der Danae (‘The Love of Danae’), completed in 1940, may also be known in part from orchestral excerpts. Other operas are Die Frau ohne Schatten (‘The Woman Without a Shadow’), Die ägyptische Helena, Arabella, Intermezzo, Daphne and finally, in 1941, Capriccio.

In the decade from 1886 Strauss tackled a series of symphonic poems, starting with the relatively lighthearted Aus Italien (‘From Italy’) and going on to Don Juan, based on the poem by Lenau; the Shakespearean Macbeth; Tod und Verklärung (‘Death and Transfiguration’); Till Eulenspiegel, a study of a medieval prankster; Also sprach Zarathustra (‘Thus Spake Zarathustra’), based on Nietzsche; a series of ‘fantastic variations’ on the theme of Don Quixote; and Ein Heldenleben (‘A Hero’s Life’).
Concertos by Strauss include two for the French horn, an instrument with which he was familiar from his father’s eminence as one of the leading players of his time. There is an early violin concerto, but it is the Oboe Concerto of 1945, revised in 1948, that has particularly impressed audiences.

Other Orchestral Works

Strauss wrote various other orchestral works, some derived from incidental music for the theatre, music for public occasions or his operas. The Symphonia domestica and An Alpine Symphony may rank among the symphonic poems, in view of their extra-musical content, while the poignant Metamorphosen for 23 strings, written in 1945, draws inspiration from Goethe in its lament for what has been lost.

In common with other German composers, Strauss added significantly to the body of German Lieder. Most moving of all, redolent with a kind of autumnal nostalgia that is highly characteristic, are the Vier letzte Lieder (‘Four Last Songs’). He composed songs throughout his life, with a substantial body of such works written in adolescence.

Strauss’s piano music dates principally from his last years at school, illustrating both his precocity and his understanding of the instrument, which then became so apparent in his songs.