Wales took a major step forward for Computer Science on 22nd June with the announcement by Leighton Andrews, the Minister for Education and Skills, at Technocamps at Swansea University that the government would be investing £3m in supporting digital leadership. The announcement has put Wales at the forefront of change in the United Kingdom – meeting a key demand of the Computer Science community to create an infrastructure to support and train teachers. The investment includes:
- The National Digital Learning Council “to provide expert guidance on the use of digital technology in teaching and learning in Wales”
- A bilingual learning platform – provisionally called Hwb – for learners and teachers to “share resources, knowledge and experience”
- A National Digital Collection – a “repository for thousands of curriculum and good practice resources for teachers and learners to upload”
- Encouraging the use of iTunes U – designed to create and share courses
- Establishing “digital leaders” from across Wales
In our ICT reform submission to the (English) Department for Education we argued that the embedding of rigorous new Computer Science curricula in schools requires that resources be identified to build a new teaching infrastructure by training current – and new – teachers. There needs to be recognition that we are introducing a new subject and that unlike other GCSE subjects we will need to train or re-train a new generation of teachers.
Given the gap between the potential removal of the Information and Communications Technology Programme of Study in September 2012 and the introduction of the new National Curriculum, we are keen that no momentum is lost with regard to the crucial area of teaching and teaching support. Teacher training requires the active recruitment of ICT and Computer Science specialists – right from primary into secondary school.
In other countries the ‘teacher issue’ is seen as both fundamental and extremely problematic, given the status of the subject as a new discipline and the propensity for teachers to be self-taught. We believe the issue of appropriate qualifications and Continuing Professional Development for Information and Communications Technology and Computer Science teachers to be extremely important – the Royal Society has concluded that there is a shortage of teachers who are able to teach beyond basic digital literacy: only 35% of ICT teachers hold a relevant post-A-level qualification in the subject.
Next Gen Skills believes it is vital that new Computer Science teachers are also equipped with a strong grounding in Computer Science during their training if they do not have existing qualifications in Computer Science. In his speech to BETT, the Secretary of State supported additional Continuing Professional Development for teachers in Information and Communications Technology and Computer Science to ensure educators receive the best possible Initial Teacher Training and Continuing Professional Development in the use of educational technology. He also pledged to work with the Teacher Development Agency to develop teacher training courses in the coming year so that all teachers get the knowledge and experience they need to use technology confidently.
Now that Wales has shown the way, what practical action is planned in England either regionally or nationally? Since January little further has been said about this support. Next Gen Skills will shortly be pressing MPs, London Government and local authorities to develop their support for Computer Science training during the curriculum changes. Surely this is an area for action by the Department for Education.
Currently the PRSN operates 25 seismic stations in Puerto Rico, the Dominican Republic and the British and US Virgin Islands. There are two types of stations: analog stations, which consist of a sensor, a communication system, batteries and a solar panel, and other peripheral electronic equipment; and digital stations, which consist of the equipment mentioned above plus a digitizer.
There are three main types of sensors: short period, broadband and accelerometer. The short-period and broadband stations are also known as weak-motion stations because they detect less intense events very well but may saturate when events are very strong. The accelerometers are considered strong-motion stations because they are designed to record the more intense events on scale.
Each short-period station consists of a Teledyne Geotech S-13 vertical-component seismometer with a natural frequency of 1 Hz, a Teledyne Geotech VCO pre-amplifier, a REPCO or Monitron radio transmitter modulated to a 150 MHz frequency, and a 12 V battery that is recharged by solar panels at most of the stations. In early 1992 the SJG seismic station became a 3-component station with the installation of a Mark L-43D geophone, which has a natural frequency of 2 Hz and 5500 ohms resistance. The Mona Island and Desecheo stations followed SJG. The broadband station at Cornelia Hill (Cabo Rojo) consists of a Guralp CMG-407 seismometer.
There are three repeater stations: Cerro Piña in Caguas, Cerro Santa Ana in Maricao and Cerro Punta in Jayuya. At these stations the signals from the different seismic stations are combined and relayed forward by radio, except for the segment between Cerro Punta and Cerro Santa Ana, where the information is transmitted via microwave.
All the signals from the seismic or receiving stations are received directly in real time at the data acquisition center, located in the PRSN office on the Mayagüez campus of the University of Puerto Rico. The reception equipment consists of antennas and REPCO radio receivers (810-055-03). A Teledyne discriminator separates all the signals, which are then amplified. The data from 8 of the stations are registered simultaneously on seismographs. In September 1991 the IASPEI data acquisition system (Tottingham and Lee, 1989) started running on a PC 386, recording the signals of all the stations once an event was detected by at least three stations. This program replaced the INTEL system, installed by the Lamont-Doherty Earth Observatory, which recorded events on magnetic tape. Alongside the IASPEI system, from June 1992 the SOUFRIERE program, developed by the Seismic Research Unit (SRU) of the University of the West Indies in Trinidad, recorded earthquakes on a separate PC installed solely for that purpose. Once a day, or at the moment of a significant event, the data were transferred from both computers to a third PC where processing took place. Due to its continuous recording format, the SOUFRIERE program detected many small earthquakes that were not recorded by IASPEI. In September 1999, both systems were replaced by the ViSeis continuous recording program.
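The triggered-recording rule described above (save an event only when several stations detect it at about the same time) is a simple coincidence test. The sketch below is only an illustration of that idea, not the PRSN's actual software: the 30-second window and the example pick times are assumptions, while the three-station threshold comes from the description above.

```python
# Minimal sketch of an "at least three stations" coincidence trigger.
# The 30 s window and the pick times below are illustrative assumptions;
# only the three-station rule comes from the text above.

def coincidence_trigger(picks, min_stations=3, window_s=30.0):
    """picks: list of (station, detection_time_s); returns trigger start times."""
    picks = sorted(picks, key=lambda p: p[1])
    triggers = []
    for i, (_, t0) in enumerate(picks):
        # Count distinct stations with a detection inside the window starting at t0.
        stations = {s for s, t in picks[i:] if t - t0 <= window_s}
        if len(stations) >= min_stations:
            triggers.append(t0)
    return triggers

# Example picks: (station, seconds since midnight) for four hypothetical detections.
picks = [("SJG", 100.2), ("MONA", 101.7), ("DESECHEO", 103.9), ("CABO", 250.0)]
print(coincidence_trigger(picks))  # -> [100.2]
```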
"raw_score": 2.0072128772735596,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Science & Tech. |
Tobacco smoke contamination lingering on furniture, clothes and other surfaces, dubbed thirdhand smoke, may react with indoor air chemicals to form potential cancer-causing substances, a study found.
After exposing a piece of paper to smoke, researchers found the sheet had levels of newly formed carcinogens that were 10 times higher after three hours in the presence of an indoor air chemical called nitrous acid commonly emitted by household appliances or cigarette smoke. That means people may face a risk from indoor tobacco smoke in a way that’s never been recognized before, said one of the study’s authors, Lara Gundel.
Previous research has shown that secondhand smoke, which is inhaled by nonsmokers exposed to fumes from cigarettes, raises the risk of cancer and heart disease. More research is needed to identify the potential health hazards of thirdhand smoke, Gundel said. Overall, tobacco use causes 20 percent of all cancer deaths, according to the study published in Proceedings of the National Academy of Sciences.
A previous study, published in the journal Pediatrics in January 2009, found residual tobacco smoke is deposited on furniture, carpeting and clothing and coined the phrase “thirdhand smoke.”
The current study found that when the residue from tobacco smoke settled on indoor surfaces, it mixed with indoor air pollutants to form tobacco-specific nitrosamines, or TSNAs, which are potent cancer-causing substances found in unburned tobacco and tobacco smoke.
The researchers checked nitrosamine levels by exposing paper to smoke and then to nitrous acid, which is produced by gas ovens and burners that aren’t properly vented, as well as by cars. They also tested interior surfaces of a heavy smoker’s truck.
In both cases they found the reaction between the nicotine in thirdhand smoke and the nitrous acid produced two known and potent nitrosamines. They also found a tobacco-specific nitrosamine that is absent in freshly emitted tobacco smoke.
People, particularly infants and toddlers, are most likely exposed to these carcinogens by either inhaling dust or by skin contact, the authors said. Using fans and opening a window doesn’t help eliminate the hazards because most of the nicotine and other substances from burning cigarettes aren’t found in the air, but are absorbed by surfaces, Gundel said.
“Buildings, rooms, public places should be 100 percent smoke free,” she said. “Replace nicotine-laden furniture, carpets and curtains. Nicotine absorbs into these materials. The stuff that’s imbedded can continue to come to the surface.”
The researchers are trying to determine how long these nitrosamines may last as a result of the interaction of thirdhand smoke and the indoor air pollutant, nitrous acid. They are also looking to develop ways to track exposure to nitrosamines.
“We know that these residual levels of nicotine may build up over time after several smoking cycles, and we know that through the process of aging, thirdhand smoke can become more toxic over time,” said study co-author Hugo Destaillats, a chemist with the Indoor Environment Department of the Berkeley national lab’s Environmental Energy Technologies Division.
The study was sponsored by the University of California’s Tobacco-Related Disease Research Program.
MYOMECTOMY DEFINITION
A myomectomy is a procedure that can help improve a woman’s chances of getting pregnant. It is the surgical removal of fibroids from the uterus; because the uterus is kept in place, pregnancy remains possible.
MYOMECTOMY PURPOSE
A myomectomy is often done to help women get pregnant. Although it does not guarantee success every time, it can improve the chances of conception. Myomectomy is also recommended for women with anemia or pelvic pain that is not relieved by medication.
MYOMECTOMY RISKS
As with any surgical procedure, myomectomy carries a risk of bleeding and infection. To reduce the risk of excessive blood loss, medication therapy may be suggested to help shrink the fibroids.
MYOMECTOMY PREPARATION REQUIRED
Before a myomectomy, consult your doctor about the risks and about what you need to do before, during, and after the procedure, so you know what to expect. You may also be given special instructions on medication, food, and beverage intake; follow them carefully to keep yourself safe.
MYOMECTOMY PROCEDURE
There are several ways a myomectomy can be done, depending on the size, location, and number of the fibroids to be removed. In hysteroscopy, a lighted instrument is inserted through the vagina to reach the uterus. In laparoscopy, a small incision is made in the abdomen. In laparotomy, a larger incision is made in the abdomen.
MYOMECTOMY COMPLICATIONS
Like any surgical procedure, myomectomy carries a risk of infection. Rare complications include injuries to the bowel or bladder and uterine rupture during a later pregnancy or delivery.
MYOMECTOMY SIDE EFFECTS
The uterine incision may occasionally cause infertility due to scarring, but this is very rare.
MYOMECTOMY RESULTS
Myomectomy can help relieve fibroid-related bleeding and pelvic pain. If you are having the procedure in order to get pregnant, try to conceive soon after your recovery, as there is a chance the fibroids may grow back.
Five years ago, gravitational waves had not yet been detected by mankind.
Now the observations are flowing in at an astonishing pace. In the first six months of last year, the LIGO-Virgo collaboration averaged 1.5 gravitational wave events per week.
From April 1 to October 1, 2019, the upgraded LIGO and Virgo interferometers detected 39 new gravitational wave events: ripples in spacetime from massive collisions between neutron stars or black holes. In all, the Gravitational-Wave Transient Catalog 2 (GWTC-2) now boasts 50 such events.
This gives us the most complete census of black holes to date, representing a range of black holes never detected before, and it can reveal previously untapped detail about the evolution and afterlife of binary stars.
“Gravitational-wave astronomy is revolutionary – revealing to us the hidden lives of black holes and neutron stars,” said astronomer Christopher Berry of Northwestern University, a member of the LIGO Scientific Collaboration (LSC).
“In just five years we have gone from not knowing that binary black holes exist to having a catalogue of more than 40. The third observing run led to more discoveries than ever before. Combining these with previous discoveries paints a beautiful picture of the rich diversity of the Universe.”
You may have already heard of some of the new discoveries made during this observing run.
GW190412 (gravitational wave events are named for their date of discovery) was the first detected collision between two black holes with distinctly unequal masses; all previously discovered black hole collisions involved more or less equal-mass binaries.
GW190425 is thought to come from a collision between two neutron stars, only the second such event detected (the first was in August 2017).
GW190521 finally confirmed the existence of a ‘middleweight’ class of black holes, sitting between stellar-mass black holes and supermassive behemoths.
GW190814 was the first collision involving an object in the ‘mass gap’ between neutron stars and black holes.
“So far, the third observing run of LIGO and Virgo has brought many surprises,” said astronomer Maya Fishbach of Northwestern University and the LSC.
“After the second observing run, I thought we would see the full spectrum of binary black holes, but the landscape of black holes is much richer and more diverse than I thought. I’m excited to see what future observations will teach us.”
That’s not all the new data haul has to offer. Two other events, GW190426_152155 and GW190924_021846, stand out as unusual. Yes, those names are long: as more and more events are found, the date alone is no longer enough to identify them, so the new naming convention also includes the time in UTC.
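As a toy illustration of that naming pattern (assumed here to be simply ‘GW’ plus the UTC date and time; the collaboration’s full rules may add refinements), a designation like GW190426_152155 can be built from an event’s UTC timestamp:

```python
from datetime import datetime, timezone

def gw_event_name(t_utc: datetime) -> str:
    """Build a GWYYMMDD_HHMMSS-style designation from a UTC event time."""
    return "GW" + t_utc.strftime("%y%m%d_%H%M%S")

# Example: an event observed on 26 April 2019 at 15:21:55 UTC.
print(gw_event_name(datetime(2019, 4, 26, 15, 21, 55, tzinfo=timezone.utc)))
# -> GW190426_152155
```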
“One of our new discoveries, GW190426_152155, may have been a merger between a black hole of about six solar masses and a neutron star. Unfortunately the signal is faint, so we are not entirely sure,” said astronomer Serguei Ossokine of the Albert Einstein Institute in Potsdam, Germany.
“GW190924_021846, on the other hand, is certainly from the merger of two of the lightest black holes we’ve ever seen. One has six solar masses, the other nine.”
The new populations of black hole and neutron star mergers are described in four preprint papers.
The first paper lists the 39 new events. The second reconstructs the mass and spin distributions of the 47 merger events in the GWTC-2 catalog, and estimates the rates of black hole and neutron star collisions. The third searches hard for gamma-ray bursts associated with the merger events (none were found). And the fourth tests the data against the predictions of general relativity; spoiler: general relativity holds up completely.
Overall, the new collection of merger events is not just a way to learn about collisions. It also provides a way to study black holes directly – objects that are notoriously difficult to investigate because they emit no detectable radiation.
Thanks to gravitational waves, we know more about these objects than we did a year ago, and from here that knowledge will only snowball.
“Merging black hole and neutron star binaries are a unique laboratory,” Berry said.
“We can use them to study both gravity – so far Einstein’s general relativity has passed every test – and the astrophysics of how massive stars live.”
ISIS in Perspective
Americans, following a long tradition of finding monsters overseas to destroy, are now focusing their attention and their energy on a relatively new one: the group variously known as ISIS or ISIL or the Islamic State. The group has become a major disruptive factor in the already disrupted internal affairs of Iraq and Syria, and it is legitimately a significant object of concern for U.S. policy as far as instability and radicalism in the Middle East are concerned. The outsized role that this group has come to play in discourse about U.S. foreign policy, however—including hyperbolic statements by senior officials—risks a loss of perspective about what kind of threat it does or does not pose to U.S. interests, and with that a possible loss of care in assessing what U.S. actions in response would or would not be wise.
Several attributes of ISIS have repeatedly and correctly been identified as measures of the group's strength, and aspects of the group's rise that are worthy of notice. These include its seizure of pieces of territory in both Iraq and Syria, acquisition of financial resources, and enlistment of substantial numbers of westerners. Although these are impressive indicators of the group's success, none of them is equivalent to a threat to U.S. interests, much less a physical threat to the United States itself—at least not in the sense of a new danger different from ones that have been around for some time. Money, for example, has never been the main determinant of whether a group constitutes such a danger. Terrorism that makes a difference can be cheap, and one does not need to rob banks in Mosul or to run an impressive revenue collection operation in order to have enough money to make an impact. Even a terrorist spectacular on the scale of 9/11 is within the reach of a single wealthy and radically-minded donor to finance.
The involvement of western citizens with terrorist groups has long been a focus of attention for western police and internal security services. To the extent this represents a threat, it is not a direct function of any one group's actions or successes overseas, be they of ISIS or any other group. Several patterns involving westerners' involvement with foreign terrorist groups are well established. One is that the story has consistently been one of already radicalized individuals seeking contact with a group rather than the other way around. If it isn't one particular group they seek out, it will be another. A further pattern is that, despite frequently expressed fears about westerners acquiring training overseas that they then apply effectively to terrorist operations in the West, this hasn't happened. Faisal Shahzad and his firecracker-powered attempt at a car bomb in Times Square illustrate the less ominous reality. Yet another pattern is that apart from a few westerners whose language skills have been exploited for propaganda purposes, the westerners have become grunts and cannon fodder. They have not been entrusted with sophisticated plots (unsuccessful shoe bomber Richard Reid being the closest thing to an exception), probably partly because of their evident naiveté and largely because of groups' concerns about operational security and possible penetration.
The control by a group of a piece of territory, even if it is mostly just sand or mountains, is what most often is taken mistakenly as a measure of the threat a group poses, and this phenomenon is occurring in spades with ISIS. Probably seizure of land is interpreted this way because following this aspect of the progress of a group is as simple as looking at color-coded maps in the newspaper. The history of terrorist operations, including highly salient operations such as 9/11, demonstrates that occupying some real estate is not one of the more important factors that determine whether a terrorist operation against the United States or another western country can be mounted. To the extent ISIS devotes itself to seizing, retaining, and administering pieces of real estate in the Levant or Mesopotamia—and imposing its version of a remaking of society in those pieces—this represents a turn away from, not toward, terrorism in the West. Significant friction between ISIS (then under a different name) and al-Qaeda first arose when the former group's concentration on whacking Iraqi Shias was an unhelpful, in the view of the al-Qaeda leadership, digression from the larger global jihad and the role that the far enemy, the United States, played in it.
One asset that non-state terrorist groups are traditionally considered to have, and a reason they are considered (albeit wrongly) to be undeterrable, is that they lack a "return address". To the extent ISIS maintains a mini-state in the Middle East, it loses that advantage. Any such mini-state would be more of a burden to the group than an asset, beyond whatever satisfaction the group gets from installing its warped version of an Islamic order in its little piece of land. Maintaining and exerting power in the mini-state would be a difficult, full-time job. The place would be a miserable, ostracized blotch on the map with no ability to project power at a distance. It would be a problem for the immediate neighbors, and even more of one for the governments out of whose territories the mini-state had been carved, but its existence would not make ISIS any more of a threat to the United States than it otherwise would be.
Inventive Tools Spark Innovative Designs
Each fall, as RISD’s newest students begin the creative rite of passage known as Foundation Studies, they tackle a series of three-dimensional design challenges in their Spatial Dynamics studios.

Last fall Senior Critic Deborah Coolidge MFA 80 CR had an unusual challenge waiting for students in her Spatial Dynamics sections: design and build a hand-held wooden tool that can be used to pick up an egg, move it, break it over a bowl and then beat it.

The resulting egg crackers—some of which are on display this spring at the President’s House at 132 Bowen Street—are functional and creative flights of fancy that beautifully capture principles of design and engineering. Referencing everything from a dentist’s drill to ancient weaponry to the elegant beak of a bird, the egg crackers demonstrate the power of the creative process in bringing imaginative solutions to basic engineering problems.

“The beauty of this assignment is that all of the tools perform the same series of tasks,” Coolidge says. “But the students go off in very, very different directions and the end results all reflect something about their personalities or creative interests.”

The primary purpose of the assignment is to encourage students to explore the properties of wood as a material, Coolidge explains. This calls for them to saw, shape and make joints as they not only design but also build functional objects. And though the project primarily focuses on design, it also requires ingenuity and experimentation as students work through the mechanics of levers and pulleys and the geometry of wedges and inclined planes.

RISD’s Dean of Continuing Education Brian Smith, a trained engineer who is a natural advocate for the college’s STEM to STEAM initiative, notes that the tools created in Coolidge’s studio beautifully illustrate the inextricable links between art and science that have long fueled invention and innovation.

“People don’t talk about the fact that Samuel Morse, who invented the telegraph and Morse code, was an extremely gifted painter, or that Rufus Porter, the man who founded Scientific American magazine, was a muralist and portrait artist,” Smith says. “I think one of the important questions we have to ask is: At what point did art split from science? Because it wasn’t always this way.”

Smith finds the intricate gears and pulleys Foundation students created for their egg crackers in their very first semester to be mind-boggling. In designing solutions to the problem Coolidge set, students “arrived at mathematical concepts, but in reverse—first by using their hands, then by getting to the equation,” he says.

Coolidge notes that her students quickly discovered they had to use calculators to complete the two-week project. “They really have to find out what wood can and can’t do,” she says, and in the process, they’re forced to use math in the same way that engineers do. But then they push further. “They do a lot of experimenting and rethinking as they work through these problems,” she says.

The design Benjamin Duff 15 came up with provides a case in point: his “teeth man” tool more than delivered on form, function and craftsmanship. And it offered a welcome dose of humor, too. With its “jaw” open, Duff’s tool scoops an egg into its “mouth.” By plunging a rod at the back of the tool into the mouth cavity, he’s able to smash the egg. And to top it all off, he flips the tool upside down, allowing the gelatinous egg to ooze out through two nostril holes drilled into the center.

“The class went wild with that one,” Coolidge says of Duff’s demonstration. “But more importantly, each of the solutions presented showed a real attempt to focus on function through an understanding of the material and heightened attention to detail and form.”
Answers created by Sam Birch, a Canadian expert on Titan working at Cornell University, and provided to CSS by Bruce Callow.
Titan orbits Saturn, which is 10x the distance from the Sun that Earth is. When Cassini was launched, it took ~7 years to get to Saturn. Though Cassini was large compared to other planetary spacecraft, it is quite small compared to a human spacecraft, which would require all sorts of life support equipment, etc. I would then expect that it takes at least 7 years to get to Saturn with humans, and perhaps longer depending on the orbital requirements for a human mission (i.e., are they allowed to swing close to Venus (close to the Sun) for gravity assists?, etc.)
If Titan has life on the surface, and that is still a big if, it won’t be life like we know it. There is almost no oxygen in the atmosphere and no liquid water, so life like plants/animals is not possible. Given that Saturn/Titan are so far away, and Titan has a thick atmosphere, life that requires photosynthesis would also really struggle (there is ~100x less light at the distance of Titan than Earth, and Titan’s atmosphere absorbs a lot of the visible light that plants require). However, just like Europa and Enceladus, Titan has a massive subsurface ocean that all sorts of microbes and life would be very happy to live in! We have no idea if life is there, but the conditions are more than satisfactory for it to be.
Titan has a pretty stable weather cycle, where the temperature hovers around 91 K (-182 C) every day. It gets a tiny bit warmer in the day than at night, and a few degrees warmer in the summer compared to winter. This is because Titan’s atmosphere is enormous, and so it takes a really long time for heat to transport. One way to imagine this is to think of the difference between shadow and direct sunlight on Earth: it’s big because the heat capacity of our atmosphere is tiny, so differences in flux from the Sun can be felt as pretty significant. The heat capacity of Titan’s atmosphere is enormous, so it wouldn’t really matter if you stand in direct sunlight or in shadow, or if it’s nighttime in winter or daytime in summer.
But Titan does have weather, as we have seen a couple of really large storms. The difference with Earth is again related to the timescales, as Titan’s atmosphere can hold a lot more liquid than Earth’s. This means that it doesn’t rain as often as on Earth, but when it does rain, it really pours. An example rain storm can be found in this Wikipedia article on Titan’s climate (https://en.wikipedia.org/wiki/Climate_of_Titan). Titan also has seasons just like Earth because it has an obliquity (tilt) relative to the Sun. This means that the north receives more flux than the south in northern summer, which moves liquids in the atmosphere from pole to pole. Because Saturn orbits so far from the Sun though, the seasons on Titan last ~7 years!
Dutch astronomer Christiaan Huygens discovered Titan on March 25, 1655. At this time he also discovered Saturn’s rings, which he described as “a thin, flat ring, nowhere touching, and inclined to the ecliptic.” In fact, he pointed his newly built telescope (one he built himself, you had to back then) at Saturn to see what was there. Galileo had already made big discoveries at Jupiter, so why not look at the only other large planet out there (Uranus and Neptune hadn’t been discovered yet)? His first surprise was the rings, and then he noticed a dot around Saturn that turned out to be Titan!
The Cassini-Huygens mission explored Titan for ~13 years and really changed everything we know about the moon. Dragonfly was also just picked in July by NASA to go back to Titan and explore on the surface and in the atmosphere. It can do this because it is a dual-quadcopter rotorcraft lander that can fly to multiple spots on Titan, taking advantage of its super-dense, stable atmosphere. Imagine a rover like those on Mars, but with rotors and able to fly hundreds of meters at a time. This is a really exciting opportunity, and if you want more information, go here (https://dragonfly.jhuapl.edu/index.php)
Titan is the second largest moon in the solar system, at 5150 km in diameter. For the longest time it was actually thought to be the largest moon in the solar system, as we didn’t know the extent of its atmosphere. As it turned out, the atmosphere is massive, and so the solid body ended up being second to Ganymede, Jupiter’s largest moon, by just 119 km. For comparison, our own moon is just 3474 km in diameter, so Titan would appear quite large if it were where the moon is. And for comparison to the planets, Titan would be the second smallest planet, slightly smaller than Mars (6779 km), but quite a bit larger than Mercury (4879 km).
The Canadian Space Agency isn’t really involved with studying Titan. Most of their efforts are concentrated on the International Space Station, Lunar science, and a bit of stuff on Mars. Hopefully they will look farther out in the not too distant future though!
Sam Birch is a Research Associate at the Cornell Center for Astrophysics and Planetary Sciences at Cornell University. He is an expert on the evolution of the polar landscapes of Titan, Saturn’s largest moon. Sam recently answered questions from students at the Atlas Learning Academy in Airdrie, Alberta.
This Q&A was arranged by Bruce Callow, a Calgary teacher based in Costa Rica who does outreach work on behalf of NASA. Sam’s own research focuses on how liquids interact with and flow across Titan’s surface, and how they modify the landscape. We know that Titan has rivers and seas, but we don’t really know how they evolve and form, because everything is so different on Titan. You might expect it to work the same way as on Earth, but you actually get some really cool dynamics that don’t occur here. For example, when rivers hit the sea on Earth, the sediment they carry is distributed across a broad area, forming a delta. On Titan, we don’t really see deltas, and Sam is trying to understand why that is.
Podcast: Where Do Our Federal Tax Dollars Go?
April 6, 2010
In this podcast, we’ll discuss where our federal tax dollars go. I’m Michelle Bazie and I’m joined by Chuck Marr, the Center’s Director of Federal Tax Policy.
1. Chuck, how are our federal tax dollars spent?
Michelle, for every dollar a taxpayer sends to the federal government about twenty cents is spent on our military and protecting the country. Another twenty cents is spent on Social Security which is the key program that keeps elderly people out of poverty, gives them a decent standard of living, and has been one of the most successful programs in U.S. history. Another twenty cents of our tax dollar goes to health programs. Such as Medicare for elderly people and also some for Medicaid to insure lower-income people. And about a dime of our tax dollar goes to pay interest on the national debt.
2. That’s a little more than two thirds of the spending. What else is there?
That leaves about 30 cents of our tax dollar for all other government services. That’s everything from education investments to help kids go to college, to unemployment insurance and food stamps which are so critical now given the weak economy, to all the important research the National Institute of Health does, to food safety inspectors and all other programs that people think about as the responsibility of government.
3. What about myths regarding federal spending on foreign aid or assistance to vulnerable populations?
Michelle, a major budget myth is that a large share of our tax dollar goes to foreign aid and that is very much not true. Only a very small portion -- less than 1% of the budget -- actually goes to foreign aid.
As far as programs for poor people, there are many important programs that help people who are facing distress, like unemployed workers, kids who need a decent school lunch, and people who are disabled. While these programs are very important, from a budget perspective, they only amount to about 10 cents of the average tax dollar, so they are actually relatively small.
4. Chuck, how have recent economic problems affected tax revenues and government spending?
The economic crisis has made revenues go down and spending go up. And this is by design. The income tax responds to economic weakness by allowing people’s taxes to fall as their incomes drop. And also the Congress has stepped in and cut taxes for average people and small businesses. This is designed to put more dollars in people’s pockets so they will spend it and generate more economic activity.
5. So revenues go down, by design, as you said. Why does spending go up in response to an economic downturn?
Let’s take one example, Michelle. When people lose their jobs, they’ve paid into the system so they get unemployment insurance. And that’s of one of these so-called “automatic stabilizers." One example would be two neighbors, one who works in a factory and one who works in a hardware store. Say the factory worker loses his job, his unemployment insurance allows him to keep spending some money at the hardware store so that hopefully his neighbor doesn’t lose her job.
In addition to unemployment insurance, the government has increased spending on roads and on bridges through the Recovery Act, which is great because the construction industry has been hard hit by this recession.
They also had to step in and save the financial system and actually injected money into the banks and what we’re seeing now is that the banks are paying this money back.
6. So we’re seeing lower revenues and higher spending. How does that affect the deficit?
The deficit during an economic crisis necessarily goes up. It gets larger and that’s what’s happening now. And believe it or not, that’s actually a good thing because what that means is the government is stepping in temporarily and providing some demand, as a “shock absorber” for the economy.
We are much more concerned about long-term deficits, not the temporary deficit during an economic crisis. These long-term deficits will need to be addressed once the economy recovers.
7. What’s the key take-away?
Michelle, many people obviously do not consider paying taxes fun. It’s an obligation as a citizen. But I think that if people step back and think about what that money is doing -- whether it’s providing gear for a soldier, a hip replacement for an elderly citizen, access to college for a kid from a family who has never had someone gone to college -- if you step back and think about it that way, I think that people will feel a little bit better about the taxes they pay and actually proud about the money they are sending in.
Thanks for joining me, Chuck.
For more on the Center’s federal tax policy work, go to CenterOnBudget.org.
The culprit responsible for the decline of Mexico’s once lucrative jumbo squid fishery has remained a mystery, until now. A new Stanford-led study published in the ICES Journal of Marine Science identifies shifting weather patterns and ocean conditions as among the reasons for the collapse, which spells trouble for the Gulf of California’s marine ecosystems and fishery-dependent economies. It could also be a sign of things to come elsewhere.
“What is happening with the jumbo squid is indicative of larger changes impacting marine organisms and ecosystems across the northeast Pacific,” said the study’s lead author, Timothy Frawley, who was a Stanford graduate student when he conducted the research. “In many respects these squid, with their unique and adaptive survival strategies, function as sentinels of environmental change.” William Gilly, professor of biology in the School of Humanities and Sciences, was senior author of the study.
Also known as the Humboldt squid, these large, predatory creatures are targets of the world’s biggest invertebrate fishery, commercially fished in Peru, Chile and Baja California. In 2008 the Gulf of California jumbo squid fishery employed over 1,500 fishing vessels and was the fourth largest fishery in all of Mexico. By 2015, it had completely collapsed, and as of yet shows no sign of recovery.
To better understand the factors that drove the collapse and have inhibited recovery, the team compiled official fisheries records and reports, oceanographic data obtained from satellites and instruments deployed at sea, and biological measurements of over 1,000 individual squid. Comparing these data sources over time, the research team identified and described changes in ocean habitat that coincided with reductions in squid size, life span and fisheries productivity.
They found that long-standing currents and circulation patterns within the Gulf of California have shifted over the past decade. Previously, warm-water El Niño conditions that are inhospitable to the large squid were followed by cool-water La Niña phases, allowing the system to recover and recuperate. In recent years La Niña has been conspicuously absent, resulting in increasingly tropical waters across the region. As these waters warm, cooler, nutrient-rich waters ideal for both the jumbo squid and their prey have become scarce.
In response to new, warmer ocean conditions the squid limit their growth, shorten their life span and reproduce earlier. As a result, smaller, more difficult to catch and less profitable squid have become the norm, shuttering the entire squid fishing industry in the region.
“You can think of it as a sort of oceanographic drought,” Frawley said. “Until the cool-water conditions we associate with elevated primary and secondary production return, jumbo squid in the Gulf of California are likely to remain small.”
Other fisheries at risk
Though careful not to discount overfishing as a potential contributing factor in the collapse of the fishery, the researchers cite the persistence of the small-size squid in the years since fishing has ended as evidence of additional factors at play.
“These results show that our traditional understanding of the dynamics underlying fisheries, their management and sustainability are all at risk when environmental change is rapid and persistent,” said Larry Crowder, study co-author and the Edward Ricketts Provostial Professor of Marine Ecology and Conservation at Stanford Hopkins Marine Station.
The change in squid populations marks a drastic and persistent shift in the larger ecosystem, a shift likely to have significant implications for coastal fishing communities across the region, according to the study’s authors. In a related study, Frawley examines the squid fishery to understand the processes through which small-scale fishers perceive and respond to such changes.
“Small-scale fishers in Baja California and elsewhere are increasingly recognizing the value of diversifying what they catch as a means of remaining flexible and resilient,” Frawley said. “For the fishers, as for the squid, being able to adapt in real-time appears essential in the face of increasing climatic and oceanographic variability.”
Source: Stanford University
The controversial issue of editing the human genome has been in the news recently, with some groups passionately against its use – wanting a moratorium on research using the technology. Researchers in the UK have called for a debate on the issue with the intention of starting a conversation with the public to understand exactly what society, outside the research community, thinks on the topic. We thought we’d try to explain a bit about what genome editing actually is, how it might affect you, and what arguments are being made for and against its use, as well as outlining our position here at Genetic Alliance UK.
Most genetic conditions have no cure; where treatment is available, it is normally intended to manage symptoms or slow deterioration rather than stop the disease altogether. Genome editing technology presents a promising way of addressing the cause, not just the symptoms, of genetic conditions.
Genome editing is not new - scientists have been working with different techniques to modify genes for 30 years, so why are we talking about it now? The pace at which the technology is advancing, and the accessibility of the technology, put genome editing high up the agenda for scientists and bioethicists.
Previous methods of genome editing, such as zinc finger nucleases (ZFNs) and TALE nucleases (TALENs), have been clumsy and expensive. The CRISPR-Cas9 method uses RNA (which carries instructions from DNA for controlling the synthesis of proteins) to guide the Cas9 enzyme to the precise area of DNA that researchers want to remove or replace. This is much more precise than earlier methods, and is also a much more accessible, inexpensive way of editing genes - meaning that it is much more likely to be used in a clinical context eventually.
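To make the targeting idea concrete, here is a rough sketch (not a model of the real enzyme’s biochemistry): the commonly used form of Cas9 is directed to stretches of DNA that match its roughly 20-letter guide RNA and sit next to an ‘NGG’ motif called a PAM. The sequences below are invented, and the exact-match rule is a simplification, since real Cas9 tolerates some mismatches and scans both DNA strands.

```python
# Toy sketch of CRISPR-Cas9 target finding: report positions where the DNA
# matches a 20-nt guide sequence immediately followed by an "NGG" PAM motif.
# Sequences are made up for illustration; this is not a guide-design tool.

def find_target_sites(dna, guide):
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i)  # Cas9 would cut a few bases upstream of the PAM
    return sites

guide = "GACGTTACCGGATTACCAGT"               # hypothetical 20-nt guide sequence
dna = "TTT" + guide + "TGG" + "ACGT" * 10    # synthetic strand containing one site
print(find_target_sites(dna, guide))         # -> [3]
```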
The CRISPR-Cas9 technique is currently only used in a research setting and is almost always carried out on animals and animal cells.
In the UK the undertaking of research on human embryos is heavily regulated under the Human Fertilisation and Embryology Act (2008). The Act permits researchers to use donated embryos (usually ‘spare’ embryos left over after a couple has undergone IVF) for research purposes up until the embryo is 14 days old, at which point it must be destroyed. Once an embryo has been altered in any way it is not allowed to be implanted into a woman. To conduct such research the scientist must be given a licence by the Human Fertilisation and Embryology Authority (HFEA). To be very clear though, any use of genome editing for reproductive purposes is illegal in the UK, and would require Parliament’s approval before it could become possible.
It is hoped that genome editing could have many different applications for patients in the future. In the short term, the most likely application is to better understand human biology. In the longer term, researchers may be able to develop clinical applications.
The first licence to be granted in the UK allowing a research team to genetically alter human embryos using the CRISPR-Cas9 method was issued in January 2016. Researchers at the Francis Crick Institute proposed to modify genes to explore why some women have repeated miscarriages. This could potentially lead to breakthroughs for clinical medicine, but it has been indicated that their data would be used to enhance conventional techniques in IVF, not to begin editing genes for reproductive purposes.
Using the technology to find out more about human biology is just one of four categories that research using the CRISPR-Cas9 method could belong to. The other three are: research into the method itself, to try to make the technique more accurate; research to develop treatments for people who already have a genetic condition; and finally, research into the use of the technique in human reproduction.
If the CRISPR-Cas9 method could be used for reproductive purposes in a safe and ethical manner it could present an exciting opportunity to further eradicate serious genetic disease. And a leap of this kind would not be entirely without precedent. Preimplantation genetic diagnosis (PGD) has been available on the NHS for couples at risk of passing on a genetic condition to their children since 2009 (read more about our work on this here). In February 2015 legislation for the use of mitochondrial donation in IVF was approved by parliament, and there is currently a process to formulate regulation for the technology’s use - when this is ready mitochondrial donation will be available to couples through the NHS. Both of these techniques are conducted during cycles of IVF for couples at risk of having children with genetic conditions - allowing them to safely have their own biological children free from conditions that may have been present in their families for generations.
The first significant objection that we hear about the use of genome modification is around the safety of using this type of technology to produce human life. While the CRISPR-Cas9 technique is much more reliable than its predecessors, this does not mean that it is reliable enough to start using in human reproduction.
Editing the human germline means that changes would be passed on to the children of those who were conceived using the technique. Some people find this problematic, as a small mistake could be passed through generations.
It is true that the CRISPR-Cas9 technique carries potential risks, and as yet we do not know the long-term impact of using it, but this is the case with many new biological techniques. Concerns about the safety and accuracy of the method at this stage are valid. However, concerns of this nature are precisely why the scientific community has made no proposal that the editing of genes in human embryos, in its current state, be used to treat patients or produce life. Steps to use the technology on humans would not be allowed until it is definitely safe for future generations as well as the individuals being treated.
That changes would pass between generations could, however, be seen as one of the most exciting implications of the technique. The fact that changes will carry down generations could mean the eradication of life limiting and fatal genetic conditions from families that have been affected for generations.
Panic around safety is slightly premature as there are no plans at present to start using genome editing in reproduction. The technology will, however, evolve quickly and could be ready for use relatively soon. This is why it is important that we start looking at the second of the more significant arguments that are made against genome editing, which are made on the grounds of ethical concerns.
With different types of reproductive technology the argument has been made that parents could be offered the opportunity to pick and choose traits in their child, and of course this is scientifically possible. However, it is a far stretch from using a technique like this to create babies free from genetic disease to using it to create ‘enhanced’ humans through genetic modification.
We do not see this as a ‘slippery slope to designer babies’. There is an obvious line, already drawn, between the use of reproductive technology for therapeutic purposes, and the use of such techniques for human enhancement. There has been no implication that public opinion has changed to think it is ethical to cross this line. This treatment/enhancement distinction will be fundamental in the way that technology such as this is regulated.
After scientists in China reported successfully editing the gene mutation for beta-thalassemia in a human embryo, a group of American scientists called for a moratorium on using this kind of technology to edit human embryos. They have urged scientists across the world to stop conducting research of this kind.
Over the summer The Hinxton Group and a collection of medical research charities have highlighted the reasons that they think using the CRISPR-Cas9 technique in research is positive. Both of these bodies have pointed to the definitive difference between using genome editing techniques for basic research, and using genome editing techniques for reproductive purposes. The use of genome editing techniques such as CRISPR-Cas9, in research, can further our understanding of processes and disease in relation to specific genes.
The technology may not be safe to use in clinical applications right now, but it may be revolutionary in research – to help inform our understanding of human biology. The wealth of knowledge that can be gained from studying genes in this way, without any intention to use it for reproductive purposes, is invaluable to our understanding of genetic conditions. To reject this knowledge would be a mistake - and thus a moratorium is unnecessary.
We have surveyed our members and supporters to explore the patient view of genome editing. As potentially the group with the most to gain from these scientific advances, we think it is paramount that the genetic and rare disease communities are able to voice their opinions in this debate. The survey has been produced as part of the Neuro-Enhancement: Responsible Research and Innovation (NERRI) project - a report of these findings will be published in due course.
America's largest aeronautical manufacturer, Curtiss-Wright Corporation, is engaged today in an all-time record production of warplanes, engines and propellers for the Allies. This effort is reflected in plants scattered over seven states.
Over every battlefront its equipment is spearheading the attack on the enemy. The Curtiss (P-40) Warhawk, work-horse of the fighter types, now qualifies as a global fighter because of its wide operation; the Commando (C-46), originally designed as a luxury airliner, is today the backbone of the aerial Burma Road and other ATC-operated life lines; the Helldiver (SB2C), which made its debut with the Navy against the Japs some time ago, is proving an extremely destructive weapon. Others (the Seagull, the Falcon, the Jeep) perform important but less conspicuous war roles.
Company-built Cyclone engines power a large number of aircraft including some of Uncle Sam's hardest-hitting warplanes: the Boeing (B-17) Flying Fortress, Douglas (SBD) Dauntless dive-bomber, North American (B-25) Mitchell bomber, Martin Mars flying boat, Lockheed (C-69) Constellation and the new Boeing (B-29) Superfortress. Curtiss electric propellers are installed on the Lockheed (P-38) Lightning, Bell (P-39) Airacobra, Curtiss (P-40) Warhawk, Republic (P-47) Thunderbolt, North American (P-51) Mustang, Martin (B-26) Marauder, the Martin Mars flying boat, and other lesser lights.
Behind this Cyclops production (according to the last report made public by a government committee, it was surpassed in volume only by General Motors) is a story which typifies the American aviation industry's overnight conversion from peacetime to wartime operation. From four plants employing about 8,000 persons in 1939, the organization has mushroomed to approximately 18 plants with over 180,000 workers, all engaged in turning out the tools of war.
Founded by the Wright Brothers and Glenn H Curtiss and now under the presidency of C W Vaughan, the company comprises four units: the Curtiss-Wright Airplane Division, Wright Aeronautical Corporation (engines), Curtiss-Wright Propeller Division, and the Curtiss-Wright Development Division.
Output of over 20,000 planes since 1938 indicates the current concentration on production of war equipment, and in addition, the company has accelerated its development and research program. In the last year, development of the 2,200-hp Wright Cyclone engine for high-altitude planes like the B-29 Superfortress has been announced; officials have disclosed development of an engine-cooling fan assuring an advantage over Axis planes; two propeller test cells, capable of testing engines of over 5,000 hp and propellers 30' in diameter, which are the largest of their type in the world, have been erected; a giant research laboratory is now under construction.
| Airplane | Type | Service | Engine |
| Commando (C-46) | Transport | Army-Navy | Pratt & Whitney Wasp |
| Helldiver (SB2C) | Dive bomber | Navy | Wright Cyclone |
| Helldiver (A-25) | Attack bomber | Army | Wright Cyclone |
| Owl (O-52) | Observation | Army | Pratt & Whitney Wasp |
| Falcon (SNC-1) | Trainer | Navy | Wright Whirlwind |
As challenging and varied as their names are the airplanes which roll off the assembly lines of the company's Airplane Division. Warhawk, Commando, Helldiver, Jeep, Owl, Falcon: a motley group, with each plane tailor-made and streamlined for its special job; each in its own way winged death for the enemy.
Since the earliest whirrings of airplane wings, Curtiss planes have figured prominently in the history of aviation. In 1911 Curtiss sold the Army its second airplane (the first was purchased from the Wrights in 1909). The Navy purchased its initial air and water flying machine, the Curtiss Triad, in the same year. Some months before, Eugene Ely had made the first flight from a warship in a Curtiss airplane.
Curtiss also laid the groundwork for the ship catapult when, in a plane fitted with floats, he landed beside the USS Pennsylvania and was hoisted aboard by crane. He and his plane were lowered to the surface and a successful takeoff was made. Some time later, flying a Curtiss plane, Ely was catapulted from a battleship, another first!
The original amphibian planes were developed by Glenn Curtiss, as were the first flying boats. The early flying boats were used as commercial airline carriers and for military duty here and abroad, and led to the development of the now-famous NC series, one of which (the NC-4) made the first air crossing of the Atlantic Ocean in 1919.
Much of the pioneering done by Curtiss is standard practice in the aircraft industry today. It was the early Curtiss organization which developed the aileron, or movable lateral control surface, one of three elemental controls in directional flight today. The tricycle landing gear was first used on a Curtiss plane in 1911.
Other conspicuous Curtiss firsts include: plywood fuselage using laminated strips applied over a mold at a 45° angle to increase strength, twin-engine transport, first soundproof airline cabins, sleeper plane and dual-control airplane. Among Curtiss developments which may be found on practically every plane built today are the sliding-type canopy or cockpit cover, adjustable and separately-hinged rudder pedal with brakes, self-centering tailwheel and, on planes using radial engines, the speed ring to cut down head resistance.
The wing radiator, successful flotation gear for land planes, and monoplane torpedo carrier using an internally braced wing (all equally important in their day) also belong on the company "original" list.
To return to the course of Curtiss history: Pre-war pioneering led to wartime production. During World War I the plant turned out more than 5,000 JN Jenny trainers, one of the first great mass production accomplishments in the aircraft industry. These sturdy little Jennys were probably the favorite pin-up girls of World War I, for practically the entire US Air Corps earned their wings in JN biplanes.
Before the United States entered World War I, Great Britain placed a $750,000 order for Curtiss flying boats, and to fill the order Glenn Curtiss requested an advance of $75,000. The British advanced 75,000 pounds (five times as much as Curtiss had asked). Before long a backlog of $14,000,000 in Allied orders had been built up.
It was during that expansion period that Curtiss transferred his manufacturing facilities to Buffalo and formed the Curtiss Aeroplane and Motor Company.
With the United States in the war, H and F flying boats were turned out for the Navy. In addition to the Buffalo plant, Curtiss established an experimental laboratory at Garden City, LI, and a factory in Toronto, Canada.
The Armistice, which temporarily ended mass production, did not halt the development of new, faster, reliable planes, both military and commercial. In 1929, the Curtiss properties, the Wright Aeronautical Company organized by Orville Wright, and other aviation manufacturing companies merged to form the Curtiss-Wright Corporation. Toward the end of the twenty-year interval of peace, as the red tide of a new war rose higher and threatened to engulf the world, the organization was already well under way on war-geared expansion plans.
Since 1939 the Airplane Division has multiplied its weight of airframes produced more than forty times. In 1937, the area of division plants in Buffalo and St Louis was slightly more than 600,000 square feet, and employees numbered slightly more than 2,000. Now, in the third year of the war, the division has six plants in four cities with a combined area of about 9,300,000 square feet of space. Employment has passed the 85,000 mark.
Expansion began in St Louis, November 28, 1940. That plant was literally built around the original factory. Department by department the old plant gave way to the new, so that in April of 1942, and without production stoppage, the entire project was completed.
Columbus, Ohio, became the site of another new building. Ground was broken there in January, 1941, and by April, 1942, this new unit was in full-scale operation.
The new building at Buffalo airport stands on a site which three years ago was occupied by a chicken farm. A different kind of bird is hatched there now! In 1942, when Curtiss projected the C-76 Caravan plywood cargo plane, a new plant went up at Louisville, Kentucky.
As each new factory was built, sewers, roads, power plants, gas lines had to be installed, thousands of jigs, fixtures and machines were purchased and put in place. There were problems of design, tooling, accounting, stock and production control, and, above all, employment.
At the present time 38.6% of the total employees in the Airplane Division are women.
A well-rounded educational program embracing all phases of aircraft construction and plant management has been developed by the Division. The Engineering Cadette Training program, which originated in February, 1943, has already graduated over 700 women from an intensive ten-month course. These women now hold primary engineering positions in the Division's five plants and relieve graduate engineers for more technical work. Curtiss furnishes tuition, room and board, and pays each Cadette $10 a week during the course.
Instruction in Curtiss-Wright shops, a training school for new employees, the Engineering and Management Institute sponsored jointly by the company and Cornell University to train selected employees for supervisory posts, a Service School where maintenance men are trained for field service, training in factory and office management at Harvard University Business School, and various technical courses offered by high schools under Curtiss guidance all help ease the problem of securing skilled labor.
Company policy does not favor concentration on one particular type of plane but calls for the manufacture of various types required by Army, Navy and Marine Corps.
From January 1, 1938, to December 1, 1943, Curtiss-Wright's Airplane Division built 16,795 airplanes. Over 6,000 of these rolled off the production line before Pearl Harbor. This record of production was achieved despite numerous changes in design specifications necessitated by combat and operational experience. In addition to aircraft of its own design, Curtiss also built many Republic P-47 Thunderbolts.
First in the long line of Curtiss pursuit planes was the famous Kirkham fighter, which at the time of the Armistice was still undergoing final tests. It established several altitude and speed records, climbing to 30,300 feet in 1919 and traveling 140 mph in 1920.
In 1924, Lt Russell L Maughan, at the controls of a Curtiss PW pursuit, made his coast-to-coast dawn-to-dusk flight, covering 2,450 miles in 17 hours, 52 minutes. The PW-8 was the first Curtiss Pursuit to be labeled the Hawk, a name applied to present-day fighters.
During the next six years the Hawk progressed through the series P-1, P-2, P-3, P-5 and P-6. By the end of 1933, Curtiss pursuits had proved so satisfactory that the US Army purchased 291 of them for its fighter squadrons. Between 1933 and 1936, pursuit planes progressed through Types I, II, III and IV. Between those years Curtiss sold more than 100 Hawks to China for her war with Japan. The Hawks also became standard military equipment in Turkey, Bolivia, Siam, Colombia, Argentina and China. In 1935, as Hitler came to power, the company submitted to the Army a new pursuit ship, Design 75. This became the P-36, which matched the German Me-109 in speed but was slower than the British Hurricane and Spitfire. At the outbreak of World War II, France had Hawk 75As, and in September of 1939 a Hawk 75A shot down the first German plane to fall over French soil in this war. After the fall of France the British took over the balance of the French contract for 75As and assigned the ships to the Ethiopian campaign under the name Mohawks.
The P-40 series was developed from the Hawks. The value of the P-40 in the early stages of the war was inestimable. In those crucial days it was the only American fighter plane geared to quantity production. Faster and more powerful Curtiss pursuits have since come off the assembly line, from the Mohawk through the Tomahawk and Kittyhawk to the present great Warhawk.
Ever since Glenn Hammond Curtiss was awarded the first naval aircraft contract, the organization he founded has held a commanding position in the development of Navy planes. The art of dive bombing is essentially an American innovation, and it was with Curtiss planes that our Navy perfected the technique. As early as 1924, the Navy began experimenting with dive bombers. In 1928, Curtiss delivered the F8C2, first of the famed Helldiver line. In the fifteen years since the F8C2, fourteen types of dive bombers (more than 550 airplanes, exclusive of the powerful new SB2C Helldivers) can be chalked up to the Airplane Division.
The A-25 attack bomber, Army version of the Navy Helldiver, has a formidable array of machine guns for offense and defense. It is the latest in a line of Curtiss attack bombers which dates back to the A-3, first plane built for the Army as a ground attack ship in 1926.
The current Curtiss scout observation plane is the SO3C or Seagull, but some of the older SOC models, which were last made for the Navy in 1938, are still in service. In fact, one of the old SOC biplanes participated in a recent much-publicized exploit over Sicily.
Under lend-lease arrangements large numbers of the SO3C have been turned over to the British Navy, which calls it the Seamew. British camouflage and insignia are applied to the ships in the Ohio plant, with the American star stenciled over British markings, to be removed when the plane gets into British hands.
A Curtiss trainer of this war is the rugged, twin-engined AT-9 Jeep, development of a quarter century of experience. The Army uses it in transition training, to bridge the long and difficult step from a single-engine, low-speed training plane to a multi-engine bomber or fast pursuit. The AT-9 duplicates as closely as possible the complex operation of the modern bomber. For training hundreds of its fliers the Navy employs the SNC, a basic combat type. Contracts for both the AT-9 and the SNC were completed some time ago.
Curtiss started building observation planes for the Army in 1924, the XO-1 Falcon being the first ship. When the USAAF wanted a new plane for World War II for observation purposes, engineers designed the O-52, which functions as part of the ground force, not the air force, since its use includes, in addition to direct observation work, mapping, artillery spotting, troop placement, and coastal patrol duty.
Prototype of the C-46 Commando, a plane carrying the commercial designation CW-20, was test flown for the first time at St Louis on March 26, 1940. After exhaustive tests, the Army Air Forces purchased it and sent it to England. Christened the St Louis in honor of the city of its origin, it replaced six transport planes when it started on the Malta run.
Production of transport airplanes on the largest scale ever attempted in peace or war was launched in 1943 by Curtiss and Higgins Aircraft, Inc, in cooperation with the USAAF. The plane selected was the Commando.
As a complement to the C-46, a long-range transport, early in 1942 engineers projected the C-76 Caravan, the war's first airplane designed and made exclusively for military cargo purposes.
Behind the Commando and Caravan lies a history of many years' experience in the production of commercial airplanes. The story of the company's commercial planes began in 1918 when the US Post Office opened its first airmail route, pressed six converted JN trainers into service.
In 1919 Curtiss' contribution to commercial aviation was the Eagle, first American tri-motor plane. Known as the Inter-City Passenger Carrier, it flew between Los Angeles and San Francisco. A year later a single-engine Eagle, designed to carry ten passengers and ¾ ton of freight at 105 mph, set new records.
The MF pusher type flying boat came next. Two of them saw service with Gulf Coast Air Line, Inc, carrying mail and passengers. The Post Office Dept in 1924 invited the aircraft industry to design a new type of airmail plane, and the Carrier Pigeon was the result. With its deep fuselage and massive wings, it resembled a commercial truck and was used on a mail route between Chicago and Dallas.
Curtiss entered the large-plane field in 1929 with the Condor, designed for heavy transport work. This type operated on a Western route, entered coast-to-coast mail service in 1929, became the first sleeper plane in history on American Airways routes in 1934.
| Manufacturer | Airplane | Engines | Hp per engine |
| Boeing | Superfortress (B-29) | 4 Cyclone 18 | 2,200 |
| Boeing | Flying Fortress (B-17) | 4 Cyclone 9 | 1,200 |
| Curtiss | Shrike (A-25) | 1 Cyclone 14 | 1,700 |
| Douglas | Havoc (A-20, P-70) | 2 Cyclone 14 | 1,600 |
| Douglas | Bolo (B-18) | 2 Cyclone 9 | 830 |
| Douglas | Dragon | 2 Cyclone 14 | 1,600 |
| Douglas | Skytrooper (C-49, C-53) | 2 Cyclone 9 | 1,200 |
| Douglas | Dauntless (A-24) | 1 Cyclone 9 | 1,000 |
| Lockheed | Lodestar (C-60) | 2 Cyclone 9 | 1,200 |
| Lockheed | Constellation (C-69) | 4 Cyclone 18 | 2,200 |
| Lockheed | Hudson (A-29, AT-18) | 2 Cyclone 9 | 1,200 |
| Lockheed | Ventura (B-37) | 2 Cyclone 14 | 1,700 |
| Martin | Baltimore (A-30) | 2 Cyclone 14 | 1,600 |
| North American | Mitchell (B-25) | 2 Cyclone 14 | 1,700 |
| North American | Yale (BT-9) | 1 Whirlwind 9 | 400 |
| Consolidated Vultee | Valiant (BT-15) | 1 Whirlwind 9 | 400 |
| Consolidated Vultee | Vengeance (A-35) | 1 Cyclone 14 | 1,700 |
| Brewster | Buccaneer (SB2A-1) | 1 Cyclone 14 | 1,700 |
| Curtiss | Helldiver (SB2C-1) | 1 Cyclone 14 | 1,700 |
| Curtiss | Falcon (SNC-1) | 1 Whirlwind 9 | 450 |
| Douglas | Dauntless (SBD-3) | 1 Cyclone 9 | 1,000 |
| Douglas | Skytrooper (R4D-2) | 2 Cyclone 9 | 1,200 |
| Douglas | Havoc (BD-1) | 2 Cyclone 14 | 1,600 |
| Eastern | Avenger (TBM-1) | 1 Cyclone 14 | 1,700 |
| Grumman | Avenger (TBF) | 1 Cyclone 14 | 1,700 |
| Grumman | Duck (J2F-5) | 1 Cyclone 9 | 950 |
| Lockheed | Lodestar (R5O-4) | 2 Cyclone 9 | 1,200 |
| Lockheed | Hudson (PBO-1) | 2 Cyclone 9 | 1,200 |
| Martin | Mariner (PBM-3) | 2 Cyclone 14 | 1,600 |
| Martin | Mariner (PBM-5) | 2 Cyclone 14 | 1,700 |
| Martin | Mars (JRM-1) | 4 Cyclone 18 | 2,200 |
| North American | Mitchell (PBJ) | 2 Cyclone 14 | 1,700 |
| NAF | N3N-3 | 1 Whirlwind 7 | 235 |
In the first 30 months following this nation's entry into war, more than 225,000,000 hp in the form of Wright Cyclone and Whirlwind aircraft engines came from Wright plants and the Corporation's licensees.
This power has gone into action all over the world. It has played a heavy part in our bombardment program because Cyclones power the Boeing B-17 Flying Fortress, the Douglas A-20 Havoc, the North American B-25 and many other types of Army bombers. In recent months a considerable portion of this 225,000,000 hp has been made up of Cyclone 18s of 2,200 hp, which are installed in the Boeing B-29. The Navy, like the Army, has relied heavily on such power, and Cyclones turn the propellers of the Avenger, made by Grumman and Eastern Aircraft, the Douglas SBD Dauntless, the Curtiss Helldiver, the Martin Mariner and the Martin Mars. In addition to powering more than twenty types of Army and Navy planes in current use, models of this engine are used extensively by Great Britain, Russia and other Allied nations.
Much of the engine power for our ground forces also comes from the seven plants of Wright Aeronautical, and those of its licensees. Whirlwind 9s of 400 hp have played an important role in powering the M-3 General Grant, M-4 General Sherman medium tanks and M-7 105-mm self-propelled Howitzer.
This steady production of power, turned out seven days a week the year round, is in strong contrast to the company's situation at the outbreak of World War II. In September of 1939, the first month of this global war, Wright plants produced only 235,000 hp, yet at that time such production was a new record peak. The growth of the company from one main plant plus a foundry to seven gigantic factories and foundries in New Jersey and Ohio has, of course, accounted for the bulk of the increase of power produced. Likewise, the production of its engines by Studebaker, Continental Motors, the Dodge Chicago Division of Chrysler and the Naval Aircraft Factory has also helped to swell the quota. At the same time, however, during the war years and during the time of plant expansion, there has been a comparable rise in the power of each of the engine models. The Cyclone 9, which appeared in 1927 at only 525 hp, has now grown to well over 1,200 hp. The Cyclone 14, during the war years, has advanced from 1,500 to 1,700 hp, while the Cyclone 18 has advanced to 2,200 hp. These power ratings are normal takeoff ratings and do not take into consideration the much greater power available to the air forces today due to the water-injection technique now applied to Wright engines after much research in the company's laboratories.
Exact figures on floor space and total number of employees are still restricted for reasons of military security, but the seven plants alone total more than the entire industry payroll before the war.
Each of the new plants was constructed during the expansion period, presenting personnel managers with the problem of finding thousands of additional workers in areas where the labor market had already been drained. This problem was solved by setting up in vocational training schools courses which would adapt unskilled labor to the operation of automatic or semiautomatic machine tools with which Wright retooled its existing plants and which it installed in new plants. On a schedule of morning, afternoon and night classes, more than 20 vocational schools in New Jersey and Ohio helped train the thousands of Wright workers building engines today. In their courses the men and women studied shop methods, shop arithmetic, blueprint reading, and use of micrometers and other measuring gauges, and received instruction on machine tools which were duplicates of those they were later to operate.
This program went into action well before America entered the war. After Pearl Harbor, when the needs of the Armed Forces resulted in drafting many shop workers, vocational training courses were adapted to train women. Today, approximately 35% of the employees in the Wright plants are women, and the percentage is still gradually climbing higher, although employment is near its peak. More than 17,500 employees have entered the Armed Forces. Women who filled the majority of jobs left vacant by these men now operate lathes, milling machines, drill presses, grinders and burring machines, and hold down a number of jobs in the foundries, where their most important work is core making. A number of teenage workers have also gone to work for the company, the majority holding messenger, clerical and stenographic jobs, some of them working in the shop.
From a manufacturing standpoint, the plants now are largely concentrating on engines of higher horsepower, with licensee companies assuming production loads for the older types of engines on which production procedures have long been established. Particular accent is now placed on the Cyclone 18, since this engine plays an important role in the B-29 program and is also the source of power for other types of military aircraft not yet announced.
The situation in 1944 is a far cry from what it was in 1919, when the Wright Aeronautical Corporation first adopted its present corporate name. At that time, World War I had just ended, there was no commercial market for engines or planes, and the company was struggling along with its research and development program on the radial air-cooled engine. Company history goes back originally to the Wright Brothers, who in 1909 formed the original Wright Company with a capitalization of one million dollars. After a few years' existence, the original organization became the Wright-Martin Company, which during World War I manufactured Hispano-Suiza engines in a factory in New Brunswick, NJ, and which was dissolved and reorganized in order to survive after World War I. A few months after the formation of the Wright Aeronautical Corporation, headquarters and manufacturing center moved to Paterson, NJ. In 1929 Wright Aeronautical merged with the Curtiss interests to form the present Curtiss-Wright Corporation, of which the Wright Aeronautical Corporation is now the engine-building unit.
Through a license agreement between the Brazilian government and Wright Aeronautical, construction of Whirlwinds is under way in the South American Republic. Fabrica Nacional de Motores is the first factory in Brazil to build aircraft engines of American design and it is strictly a local project. Engineers and workers in the ranks are Brazilians as are their supervisors. Native airmen will fly the planes powered by these engines. But the North American concern was responsible for initial training of employees when the project first went into operation. A group of key men came to the US, sat in on Wright's factory training schools at Paterson, took notes at other industrial centers. They went back to Brazil, set up courses and taught their own men the operations to be performed in engine building the US way.
Construction of this South American aircraft plant actually began before February 23, 1942, when Brazil's representatives and Wright Aeronautical officials signed the license agreement. Wright technicians, Lend-Lease officials, General Antonio Guedes Muniz (considered the father of the enterprise) and a group of Brazilian engineers had already selected the location for the plant, approximately twenty miles outside Rio de Janeiro. Training began, tools were imported and the factory was officially opened in April of this year.
| Airplane | Service | Builder | Propeller blades |
|  | Army | North American | 3 dural |
| Coronado (PB2Y-3) | Navy | Consolidated | 3 dural (outboard), 4 steel (inboard) |
Youngest manufacturing unit of the Corporation, the Propeller Division, with headquarters at Caldwell, NJ, has to date produced more than 190 million horsepower in propellers and propeller parts.
Attaining divisional status in 1938, when general acceptance of the Curtiss Electric Propeller required an expanded production schedule and unhampered facilities for development, this branch quickly adjusted to the demand for mass production of propellers, first for national defense, later for war.
In the two and a half years since Pearl Harbor, the Division has manufactured approximately 175 million of its total horsepower output in propellers and propeller parts.
The name "Curtiss Propeller" was first introduced to the aviation industry in 1915, when blades, made of wood, were manufactured in a corner of the old Curtiss-Burgess plant at Marblehead, MA. Subsequently, propeller operations were transferred to Garden City, Long Island, later to the Airplane Division plant at Buffalo, NY.
An original group of 111 men and women made the transfer from Buffalo to the pioneer Propeller Division plant at Clifton, NJ, in 1938. In the spring of 1941, two new plants were opened: the present headquarters factory at Caldwell, NJ, and another plant at Indianapolis, IN, which has since been enlarged and is now the most completely conveyorized propeller assembly plant in the nation.
Production of hollow steel blades had been undertaken two years earlier, when the Propeller Division acquired the hollow-steel-blade processes of the Pittsburgh Screw and Bolt Corporation, Neville Island, Pittsburgh, PA. Hollow steel blade manufacturing operations in Pennsylvania were moved to a new and larger plant at Beaver, PA, in January of 1942.
The continuing development of this prop has anticipated and kept pace with the increased demands of high-powered tactical aircraft. The Division has employed every available facility and method of research, frequently has developed new facilities to improve design, testing and manufacture of blades.
The most recent additions to the Division's development facilities are twin test cells which will allow Curtiss engineers to test propellers up to thirty feet in diameter, mounted on either liquid- or air-cooled engines of 5,000 hp or more, duplicating streamlined airflow and vibration conditions similar to those encountered in flight. The largest privately owned Venturi test cells in the country, they are located at the Caldwell, NJ, plant.
The Caldwell plant is also the home of other facilities of the Experimental Engineering unit. Under this category is the most advanced type of electronic vibration-testing equipment, much of which was developed by Curtiss engineers in order to test under ideal laboratory conditions the vibratory endurance of any propeller component. In addition, the division maintains its own flight-test hangar at an adjacent airport. Such vast expansion of production schedules and attendant functions has not been accomplished without encountering complexities of the wartime manpower problem. The situation was alleviated in part, however, by improved manufacturing processes, and the development and installation of conveyors and complementary mechanical handling devices.
Some 4,500 men and women, many of them highly skilled, have left the Division for military service, but as battlefront duties called them away, sources of manpower were found. Today, one out of every four direct production employees and one of every two indirect employees are women. In many cases older men have replaced younger men who have gone to war. Nearly 1700 physically handicapped men and women have been employed in the Division's plants. Employes have been recruited through a wide variety of media, including women's and church clubs, service organizations, high schools, vocational schools, governmental agencies and colleges, as well as radio, newspaper, billboard and streetcar poster advertisements. Employees have been encouraged to bring friends and members of their own families to work in the plants.
An active program for the placement of discharged servicemen and women, both former and non-employes, now functions in cooperation with veterans' rehabilitation and other governmental agencies. By May 1 of this year more than 390 discharged veterans of World War II were at work with the Propeller Division.
Since the inception of the personnel expansion program, comprehensive training programs, both in and outside the plants in conjunction with governmental agencies, have been developed and carried on for the training of new recruits, regular employes and supervisors. These have included apprentice training programs as well as courses in Job Methods Training and Job Instruction Training.
Young women Engineering Cadettes, schooled for almost a year in specially designed courses at Rensselaer Polytechnic Institute to serve as technical assistants to the Division's engineers, first joined the engineering staff in December, 1943, and quickly proved their worth.
Chalk up many improvements for operation of the three other Curtiss-Wright offspring to the fourth division, a war baby. The Development Division, now housed in its own building at Bloomfield, NJ, was born early in April to chase the Gremlins out of the airplane, engine and propeller plants and to search for, analyze and develop new products.
Many ingenious twists incorporated in the latest versions of fighter, bomber and cargo aircraft originated in its laboratories, and engineers are already at work on products, new methods and new markets for after the war.
The Wright Brothers and Glenn H Curtiss played prominent roles in giving aviation its start. The organization they nurtured has grown up to be tops in aviation production in spite of mistakes made along the way that were criticized bitterly. Its contribution to winning the war is enormous, and from all indications Curtiss-Wright will probably play an important role in peacetime.
This article was originally published in the August, 1944, issue of Air News magazine, vol 7, no 2, pp 55-56, 58, 60, 62, 64.
Air News was printed on 9½ × 12½ bleached newsprint. My copies have not aged gracefully.
The original article includes 5 portraits of corporate officers, two news photos of personnel, 4 photos of Wright engines, 2 photos of propellers, and two photos of SB2Cs; there are also 3 tables and a map.
Photos credited to Curtiss-Wright.
"raw_score": 2.647883653640747,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Transportation |
Blueprint engine technology has been used to make electric cars that can run on petrol, diesel or hydrogen.
The Blueprint Engine has been created by researchers from the University of California at Berkeley, Stanford University, the Massachusetts Institute of Technology, and a team from the Australian Research Council.
They hope that this technology will eventually be used to create cars that are environmentally friendly, that do not need to be refuelled and that do a better job of storing energy, particularly when travelling.
But the blueprints could also be used for industrial processes such as welding.
And it could be used as a tool to produce products with the same high performance as a carbon fibre super-strong polymer.
They have published their findings in the journal ACS Nano.
The research is a collaboration between the University of California at Berkeley, Stanford and the National Science Foundation.
“Blueprints have always been a tool for researchers and for scientists to explore a new technology,” said lead author and PhD candidate Jonathan Bicknell.
“We’re now at a stage where blueprints are starting to be used in industrial applications.
“If we can use this new technology, then we have a lot to build on, but also a lot of questions to answer.”
The Blueprint Engine was created using three technologies.
The first was to use a new approach to building a 3D printer.
This is the process of building a new object in the 3D printing software, and then assembling that object into a working 3D object.
The other two are the two different types of materials used in the blueprint engines.
One of these is a material called polyester which can be printed on.
Polyester can be a strong, lightweight and flexible material that can be applied to a wide range of materials.
But there are also materials that can withstand temperatures below -200°C.
This makes them ideal for applications that are prone to extreme temperatures.
The blueprint engine is made from a mixture of polyester, a liquid silicone polymer and titanium dioxide, which allows the engine to run at very high temperatures.
“You can actually see a very bright blue, which is because the liquid silicone is heated,” said co-author James P. Kuehn.
“That’s how the liquid melts and then reacts with the titanium dioxide. This allows you to build the engine on a very small scale, but it is a very flexible material. So you can use it to build things like the engine of an electric car.”
The engine is controlled by an onboard computer that uses algorithms to precisely tune the performance of the blueprint engine. The system is also able to calculate the temperatures at which the printer can run and the materials from which it can be made.
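The article gives no detail about how this onboard computer or its algorithms work. Purely as an illustration of the kind of calculation it alludes to (choosing a material for a target operating temperature and scaling run speed to stay within that material's limit), a minimal sketch in Python follows; every material, temperature limit and tuning rule in it is hypothetical and is not taken from the researchers' work.

```python
# Hypothetical illustration only: the materials, temperature limits and the
# speed-scaling rule below are invented for this sketch, not taken from the paper.

# Assumed maximum service temperature (degrees C) for each candidate material.
MATERIAL_LIMITS_C = {
    "polyester": 150,
    "liquid_silicone_polymer": 300,
    "titanium_dioxide_composite": 800,
}

def pick_material(target_temp_c: float) -> str:
    """Return the first candidate material whose limit covers the target temperature."""
    for name, limit in MATERIAL_LIMITS_C.items():
        if target_temp_c <= limit:
            return name
    raise ValueError(f"no candidate material survives {target_temp_c} C")

def tuned_run_speed(target_temp_c: float, max_speed: float = 100.0) -> float:
    """Scale the run speed down as the target temperature approaches the chosen
    material's limit (a crude stand-in for 'tuning the performance')."""
    limit = MATERIAL_LIMITS_C[pick_material(target_temp_c)]
    headroom = 1.0 - target_temp_c / limit
    return max_speed * max(headroom, 0.1)  # never drop below 10% of full speed

if __name__ == "__main__":
    for temp in (120, 280, 600):
        print(temp, pick_material(temp), round(tuned_run_speed(temp), 1))
```

Any real controller would of course work from measured material data and a far richer model; the point of the sketch is only the shape of the temperature-versus-material check the article describes.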
The team says that, with more research, this could become a powerful tool for industrial applications, but for now the Blueprint engine can only be used with a 3D printer, and not on a 3D printer with a fully automated assembly process.
The technology is already being used in a range of applications.
One example is the manufacturing of plastic parts for a range of products, from clothing to cars.
“It’s very important for us that we get the technology to commercialise before it’s too late,” said Kuehn.
But there is also the potential to make a whole range of items using the Blueprint engine.
“We have the ability to make something that is not only very robust, but that’s flexible, that can work with a range and even have multiple applications,” said Bicknell.
"raw_score": 2.660335063934326,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Death and Afterlife
The Biblical Silence
Richard Elliott Friedman and Shawna Dolansky Overton
In Judaism in Late Antiquity
Part Four: Death, Life-After-Death, Resurrection and the World-to-Come in the Judaisms of Late Antiquity
Edited by Alan J. Avery-Peck and Jacob Neusner
We have few unquestionable references to life after death in the Hebrew Bible. Our problem, though, is not only that we have so little to go on. The problem is also that this silence on such a common concern of religion is mysterious itself. We know that ancient Israel’s religion was monotheistic, though we may debate exactly when that monotheism began. We know that it had a hereditary priesthood, a link between religion and law, a concept of divine-human covenant, and doctrines concerning patriarchs who migrated to the land from Mesopotamia and of slavery and exodus from Egypt. We know that it involved circumcision, animal sacrifice, forbidden and permitted animals, seasonal holidays, a sabbath, a Temple, opposition to idols, and the composition of sacred texts. Despite limitations of sources and the distance in time, we know a variety of facts, central and peripheral about the Israelites’ religion. Yet we have been uncertain about what they — from the person in the street to the High Priest — believed happens after death. Though belief in an afterlife was part of Mesopotamian religion to the east and is probably the most famous aspect of Egyptian religion to the west, it has been an enigma for Israel. The average Jewish or Christian layperson today has no idea what ancient Israelites believed; and scholars are uncertain, being dependent on relatively few passages from the text, which we have barely begun to study systematically.
It is not as if death were an uncommon occurrence in ancient Israelites’ experience. Men’s average survival was only in the forties. Women’s was in the thirties. Women’s death in childbirth was almost fifty percent. Anyone who would reach middle age would have lost most of his or her immediate family. Death was so common, so familiar. Why, then, are there so few texts showing any interest in humans’ fate after dying?
It would almost be better if there were no texts at all. Then we could conceive of a systematic rejection of such things by the biblical authors over the millennium that it took to compose the Hebrew Bible. There are, however, just enough suggestive texts to confuse the issue. So rather than biblical silence perhaps we should say that it is just a whisper. This whisper is faint enough to make people, especially non-specialists, imagine that there was virtually no belief in afterlife in constitutive biblical religion or in early biblical Israel thereafter. So, most recently, Neil Gilman, in a book on death in Judaism, asserts firmly and repeatedly that death was seen as final by the biblical writers, with no more than three possible exceptional passages in the entire Hebrew Bible.
But we know that there was belief in an afterlife in Israel. The combination of the archaeological record and the references that we do have in the text leave little room for doubt. Archaeological data indicate the nature of such beliefs. J. W. Ribar (1973, 45-71) identified tomb installations that appear to reflect the existence of a cult of the dead and attendant beliefs that the deceased continued some form of existence after death. Ribar noted tombs that had apertures cut into their ceilings through which it would be possible to give offerings to the dead or which had storage jars placed directly over the heads of the corpses. Ribar’s examples include the Grabkammer II tomb from Megiddo (MB II B-C), Megiddo Tomb 234 (MB II), Hazor’s “porcupine Cave” (MB II) and the caves in Area E, Gezer (late LB-Iron 1), the Double Tomb 6-7 from Tell Abu Hawam’s cemetery (LB II), Tombs I (Iron IIB) and II (Iron IIC) at Beth-Shemesh, and one bench tomb from Sahab in Trans-Jordan (Iron IIC). Similar findings were made at a Late Bronze installation at Dothan. Although the Dothan material remains largely unpublished, R. E. Cooley (1983: 50-51) has reported concerning Tomb I (c. 1400-1200/1100):
An auxiliary opening or circular window was positioned on the front side directly above one of the chamber niches. Outside the chamber and below the opening two large storage jars had been placed. Each jar contained a dipper juglet for the dead to receive the contents. . . Such provisions give sufficient evidence for the concern of the living to provide the dead with refreshing drinks. It is also possible that the Dothan installation was used for libations. . . At Dothan water would be poured into the chamber through the window opening and then the vessels placed along the stone retaining wall of the shaft. This would account for the large number of vessels found outside the chamber. Few Palestinian sites have yielded such apparatus to supply water for the thirst of the dead. The ritual purpose of these devices is clearly evident.
Extensive excavations in Judah have produced an abundance of evidence surrounding Judahite burial practices, even more than have been recovered from northern Israel. Elizabeth Bloch-Smith’s detailed and comprehensive study is particularly valuable. To summarize: there were several different methods of interment, though the most common one for Iron Age Judah was the bench tomb. Despite much diversity in choice of specific goods provided for the deceased, all burials, bench-tomb or otherwise, contained the same categories of goods at comparable relative frequencies. Ceramic vessels and jewelry were most common, while personal items and tools were less so. Bloch-Smith explains that the abundance of ceramic vessels indicates that “nourishment in the afterlife was of paramount importance. An open vessel such as a bowl or crater for food, and a pilgrim flask, chalice or jar for liquids, were the most common forms, frequently accompanied in highland burials by a lamp for light” (141). Beginning in the tenth century BCE, bowls, storejars with dipper juglets, plates/platters, cooking pots, wine decanters and amphoras were widely adopted into the mortuary repertoire. Apparently, these new vessel forms functioned in the preparation, serving and storing of food and liquids. Numerous examples of food remains are further evidence of offerings to the dead (103-8).
Additionally, Bloch-Smith cites the use of jewelry and amulets as evidence of the deceased’s need for protection via sympathetic magic (81-6). Recall also that the oldest known text of a portion of the Bible, an inscription of the Priestly Blessing from Num 6:24-26, was found inscribed in silver foil which was rolled and placed on a body in an Iron Age Judean tomb (Barkay, 1986). Bloch-Smith further suggests that the presence of female pillar figurines in many tombs is best explained as an appeal to sympathetic powers in which the dead were thought to intercede on behalf of surviving family members (94-100).
Israel’s grave goods are similar to those of its neighbors whose afterlife beliefs are well-known. Egypt’s cult of the dead is familiar through pyramids, mummification, and the Book of the Dead. Egyptians’ central concerns were with providing the proper provisions for the deceased, especially of royalty and nobility, so that they might live comfortably in the afterlife and bestow blessings upon the living.
Mesopotamian literature also indicates a vibrant cult of ancestral veneration. Within each family, a “caretaker” (paqidu) was responsible for the care of the ghost (etemmu) of his deceased ancestor. This included performing such important services as making funerary offerings (kispa kasapu), pouring water (me naqu), and invoking the name (suma zakaru) (Bayliss 1973, 116). Necromancy was also a well-developed and intricate art. Magical literature mentions the restless ghost who returns to haunt the living, and works such as The Descent of Ishtar and The Epic of Gilgamesh demonstrate the belief in human afterlife in a world of dust.
At Ugarit, too, there are comparable facilities for providing for the continuing wellbeing of the dead, and there are parallels with allusions to after-death experiences in the Bible. Excavations at Ras Shamra have revealed the use of pipes leading from ground level down into the tomb, which were used to provide the deceased with water (Lewis 1992a, 241). KTU 1.161 describes a liturgy of a mortuary ritual directed toward the deceased royal ancestors, some of whom are called rapi’uma (see our discussion of repa’îm below). The deceased are invoked to assist in bestowing blessings upon the reigning king. Other texts (KTU 1.6.6.45-49; 1.113) refer to the deceased as gods, ilu (see the discussion of ’elohîm below). Some scholars argue that the marzeah at Ugarit and elsewhere was “a feast for and with departed ancestors corresponding to the Mesopotamian kispu” (Pope, 1981: 176). The importance of ancestor worship is also seen in the phrase ’il ’ib, the “divine ancestor,” which occurs at the head of pantheon lists as well as in epic texts and sacrificial and offering lists (Lewis, 1989: 70).
Especially important are recent studies that have advanced our understanding of ancestor veneration among the ancient Israelites as well. Albright began to make the case for ancestral sacrifices in 1957, when he suggested that this was one of the functions of the bamôt in ancient Israel. He concluded that “biblical references to veneration of heroic shrines (e. g. Rachel and Deborah), cult of departed spirits or divination with their aid, and high places in general add up to a much greater significance for popular Israelite belief in life after death and the cult of the dead than has hitherto appeared prudent to admit” (1957: 257). Since then, a number of scholars have pursued this line of inquiry. In particular, H. C. Brichto has gathered an abundance of evidence demonstrating the persistence of ancestor veneration in ancient Israel, focusing mainly on the importance of land ownership in connection with the continuation of a lineage. He stresses the prohibition against selling one’s land forever, stating that with land remaining the property of a family in perpetuity, it belongs “to the dead ancestors and to their unborn descendants — it is a sine qua non of their stake in immortality” (1973, 9). The dead were buried on their land, and their descendents were responsible for the maintenance of the grave. Similar to what we know of ancient Mesopotamian practices, Brichto claims that the condition of the dead in the afterlife is “connected with proper burial upon the ancestral land and with the continuation on that land of the dead’s proper progeny” (1973, 23). Bloch-Smith agrees, stating that “an ancestral tomb, whether located on inherited land or in the village cemetery, served as a physical, perpetual claim to the patrimony. Family proximity to the tomb facilitated caring for and venerating the dead. These functions of the tomb, in addition to the attributed powers of the deceased, made the cult of the dead an integral aspect of Israelite and Judahite society” (1992, 146).
The powers of the dead to which Bloch-Smith is referring would include the ability to know the future as well as the ability to bestow blessings upon one’s descendants, assuming proper obeisance was made to the ancestors. Homage was paid through correct burial, maintenance and care of the grave, and offerings of food, libations, and incense to appease the dead spirits. Deut 26:14 explicitly disallows offering the dead tithed food; but, as many scholars have observed, this prohibition is not against making other offerings of food to the dead. Brichto points out that “not only does this verse attest to the practice, as late as the time of Deuteronomy, of offerings made to the dead; it attests that normative biblical religion accorded them the sanction of toleration” (1973, 29). T. J. Lewis concurs, pointing out that scholars for over a hundred years have suggested that this passage in Deuteronomy may allude to offerings to the spirits of the dead “for the purpose of rendering them propitious to the survivors” (see Driver, 1916, 291-292, whom Lewis cites, though note Driver’s reservations; Lewis 1989, 103).
Baruch Halpern has identified the historical context within which ancestor veneration thrived for centuries and then was forcibly diminished as an increasingly radical monotheistic urban elite gained ascendancy in Judah under Kings Hezekiah and Josiah. Prior to the devastation of the Judean countryside by the Assyrian emperor Sennacherib during the time of Hezekiah, the bulk of the population lived in the rural areas outside Jerusalem and maintained traditional clan and kinship-based communities, largely removed from political influences within Jerusalem. “Along with blood claims and claims on the land, the clan sector (mispahâ) shared its ancestry. Indeed, ancestry and the common treatment of the ancestors were a language in which claims to property could reliably be lodged.” The orientation of each individual kinship community was inward as members of each community shared ancestry and property, and this common heritage formed the basis for such annual ancestral sacrifices as described in 1 Sam 20:6, where this event is David’s pretext for taking leave of Saul. Burial customs reflected this, as Israelite rock-cut tombs prior to the seventh century were multi-chambered, with room for at least four generations of male offspring. Halpern emphasizes the pre-reformation state of Judah’s intra-clan sense of community and continuity with the statement that “the Israelite inherited the house of his ancestors, the fields of his ancestors, the tools of his ancestors, the gods of his ancestors, and, in the end, the place of his ancestors in the tomb” (Halpern, 1991: 57-59).
Halpern’s work is a powerful merging of the archaeological evidence and the textual evidence to capture the place of ancestor veneration in ancient Israel. There is further internal evidence from the biblical text that coincides with the broader external picture from archaeology regarding the afterlife. First, there is a bank of terminology, some of it obvious, some of it not so obvious. The term that occurs most often in connection with after-death existence is Sheol (se’ôl). It appears sixty-five times in the Hebrew Bible. The book of Job refers to it many times, as do the narrative books, Psalms, Proverbs, and some of the prophets. Although most scholars think that it is a name for the netherworld, it is still an enigmatic term in that its original meaning and etymology are in dispute. It is not found in any of the cognate languages. Many suggestions have been made regarding the origin of this word, ranging from a speculative Akkadian su’alu, meaning “underworld,” which most agree is a misanalysis of the Akkadian, to a theoretical proto-Hebrew se’ô (root: s’h), which could best be translated as “nothingness” (Lewis, 1992b: 102). Stronger conjectures have been made by Albright, who first suggested Akkadian origins in the word sa’alu, making Sheol a “place of decision (of fates)” and later settled on a new analysis of Sheol as a place of ordeal or examination, arising out of Hebrew s’l, “to ask,” in the context of inquiry referring to the practice of necromancy (1918: 209-10). Oppenheim compares the roles of the sa’iltu-priestess in Akkadian (1956: 179-373). McCarter’s analysis of the river ordeal in ancient Israel led him to speculate that Sheol might originally have meant “the place of interrogation” (1973: 403-412).
Another important term is repa’îm, which occurs infrequently but seems to denote denizens of the netherworld in Isa 14:9; 26:14 and Ps 88:11. In other places in the Bible we have references to the Valley of the repa’îm (Jos 15:8; 18:6; 2 Sam 5:18, 22; 23:13) and to the repa’îm as one of the indigenous peoples of Canaan (Gen 14:5, Deut 2:11,20; Deut 3:13), but in the Isaiah passages and Psalm 88 the repa’îm are the dead, continuing some sort of existence in an underworld (see also Pro 2:18; 9:18; 21:16; Job 26:5). Given this confusing assortment of meanings for the word, it is fortunate that fifth century Phoenician inscriptions attest to the repa’îm as those whom the living join in dying (KAI 13:7-8, 14:8). As we have seen, it is also found in Ugaritic (KTU 1.161), connoting a line of dead kings and heroes (cf. Isa 14:9). Alan Cooper (1987: 3-4) traces the etymology of repa’îm to Ugaritic Rp’u, a chthonic deity and patron god of the King of Ugarit, associated with healing in the sense of granting health, strength, fertility, and fecundity; hence the Hebrew rapa’, “to heal.” This is important in discussing ancestor veneration in the ancient Near East, as the purposes for revering one’s dead ancestors were often requests for health, strength, and progeny.
The term terapîm is another relevant one, appearing in a number of passages in the context of divination. The etymology of terapîm points to an origin in Hittite tarpis, “spirit” (Hoffner, 1968: 61-68). On the basis of Mesopotamian evidence, K. vander Toorn interprets the terapîm as ancestor figurines which would have been used both at home and in the public cult for divination (1990: 211). According to C. Kennedy, the terapîm were ancestral images that could be life-size, as in 1 Sam 19:13, or as small as a mask; and he notes that the Septuagint translates terapîm in the case of Rachel’s theft from her family in Gen 31:19 as eidolon, i. e. an image of the dead (Kennedy 1992: 106). In 2 Kgs 23:24 they are listed as one of the divinatory and idolatrous items destroyed by Josiah in the course of his reform. Ezekiel envisions the king of Babylon consulting them in tandem with the employment of divination by casting arrows (belomancy) and by reading livers of sacrificed animals (hepatoscopy) in order to obtain an oracle (21:26), and Zechariah has the terapîm speaking in parallel to the diviners who relate false visions (10:2).
Brichto employs this idea of the terapîm as ancestral figurines to support his controversial view of another term, stating that “the physical representations of the household gods. . . are universally presumed to be designated by the Hebrew word terapîm. If this presumption is correct, these representations may be present elsewhere masked under the more general term ’elohîm, ‘gods,’ as they are clearly designated in Gen 31:30, where Laban uses the expression ‘my gods’ for the teraphim filched by Rachel” (Brichto 1973: 46). Brichto and others (Bloch-Smith 1992: 122-123; Lewis 1989: 175; Vander Toorn 1990: 210-11) have put forth the disputed notion that sometimes when the word ’elohîm appears in the Bible it is referring to the spirits of dead ancestors rather than to God. Their best example of this is in 1 Sam 28:13, where the word ’elohîm is taken by some to refer to the ghost of Samuel. It has been suggested that Isa 8:19-21 also appears to be using the term ’elohîm in this way. Bloch-Smith takes this further and postulates that the terms ’elohîm and ’elohê ’abîw often mean “divine ancestors” rather than “God” or “god of his father,” and infers from this that passages such as Gen 28:22; 31:52-54; and 46:1 are actually describing an oath sworn on deceased ancestors and sacrifices being made to ancestral deities (1992; 123). Lewis observes that
Ps 106:28 contains the curious expression ‘sacrifices of the dead’ (zibhê metîm). It is proposed above that the traditional explanation of this phrase as referring to ‘dead idols’ is inadequate. Num 25:2 served as a source for the psalmist who consciously picked up on the phrase zibhê ’elohêhen with his wording zibhê metîm. It is the view of the present work that the key to understanding zibhê metîm lies in recognizing the parallel between ’elohîm and metîm. These two terms, which occur in parallel elsewhere in Ugaritic and Hebrew, can designate the spirits of the dead. (1989, 175)
In the cases considered here, it is possible that ’elohîm was meant to designate spirits of the departed, though, in most occurrences of the word, ’elohîm certainly should be understood to mean “God.”
The word ’ittîm appears only once in the Bible, in a passage in Isaiah (19:3); Isaiah seems to have a large vocabulary of words that refer to after-life experience. According to Tzvi Abusch, ’ittîm is cognate with Akkadian etemmu: ghost, shade, or spirit (1995: 588) — which is consistent with the context in Isaiah, which mentions the consultation of ’obôt and yidde‘onîm as well as of ’ittîm.
In several places in which wizards, sorcerers, and other practitioners of forbidden magic are mentioned, we also find the phrase ’ôb weyidde‘onî(m) (Deut 18:11; 1 Sam 28; 2 Kgs 23:24; Isa 8:19). Although the precise meaning of each term is uncertain, the phrase is almost always understood as “necromancer” or “medium.” The term ’ôb is particularly ambiguous because it is found in a variety of contexts in which it can be understood as “spirit, ancestral spirit, the person controlled by a spirit, a bag of skin, the pit from which spirits are called up, a ghost, or a demon” (Kuemmerlin-McLean 1992: 469). It has also been suggested that its etymology should be sought in the Ugaritic phrase il ’ib, usually understood as cognate to Hebrew ’elohê ’abîw but plausibly meaning “god of the pit” rather than “god of the father(s)” (William Propp, personal communication). Although ’ôb is often found on its own (usually when it means “pit” or “bag of skin”), the word yidde‘onî(m) occurs only in tandem with ’ôb. Some scholars take the phrase as a hendiadys while others, along with most translations, see it as referring to separate persons (e. g. medium and wizard) (Kuemmerlin-McLean 1992: 469). The root seems to be yd‘, but what remains unclear is whether the “one who knows” is the spirit being consulted or the necromancer who does the consulting. This phrase is also frequently translated as “one who has a familiar spirit.” No matter how these words are translated, each translation conveys the basic idea of communication between the living and the dead.
The term marzeah, referring to a sort of funerary society, cognate to the Ugaritic mrzh (see above), occurs twice in the Hebrew Bible (Jer 16:5; Amos 6:7; see Halpern, 1979: 121-140; Friedman, 1980: 187-206 for discussion and bibliography).
Additionally, there are terms that usually do not refer to afterlife but which do have such meaning in particular, specialized contexts. Alan Cooper’s treatment of the word ‘ôlam in Psalm 24 is an important example of such a case. Cooper argues convincingly that Ps 24:7-10 is a fragment of a descent myth “in which a high god, forsaking his ordinary domain, descends to the netherworld, where he must confront the demonic forces of the infernal realm” (1983: 43). He sees two possible interpretations: (1) God’s entry into the netherworld to combat Death; and (2) God’s victorious emergence from the netherworld after subduing Death. One of Cooper’s main arguments is that the pithê ôlam are the same as the Egyptian gates of the netherworld. He lists other mentions of the gates of the netherworld in the Hebrew Bible: Isa 38:10; Jonah 2:7; Ps 9:14; 107:18; Job 38:16-17 (1983: 48n.).
Beyond the collection of terms such as these, there are the cases in which afterlife is explicitly expressed. The late book of Daniel speaks of those who sleep in the dust who will wake (12:2). Centuries earlier, Isaiah speaks of the dead awaking and living, using similar language to that of Daniel, as well as referring to the repa’îm (Isa 26:19). And a century earlier than that, 1 Samuel 28 recounts the story of the woman of En-Dor raising Samuel, who complains about being disturbed, criticizes Saul (as usual), and tells the future (Saul’s demise). (On the date of the work to which the En-Dor story belongs, see below.) The terms and explicit references to afterlife occur early and late, in poetry and prose, distributed through the course of the Hebrew Bible.
While arguments from silence must be taken with the usual cautions, we should still note also that the Bible has no criticism of any pagan society for belief in afterlife. Its attack on their icons is so common as to be well known to any Sunday school child. Its attacks on their sexual practices and on their human sacrifices (right or wrong) are numerous in the texts as well. But the closest it comes to polemic about the afterlife is to say that in a particular instance the Egyptians will turn to such sources for help but that this will not help them. Biblical law forbids Israelites from consulting a medium, but that is not a denial of the efficacy of what a medium does nor a criticism of the pagans for doing it. Similarly, the Torah forbids Israelites to practice magic, but it still depicts Egyptian magicians as able to turn sticks to snakes and water to blood.
Indeed, all of the usual cautions apply, of course. The passages in Isaiah and Daniel are poetry. Sheol might just mean the grave, or generically and indefinitely the place where one lies when one dies, without meaning that one has consciousness there. And Samuel in the En-Dor episode, likewise, may simply be understood to be disturbed from eternal unconscious rest rather than from a place where persons are conscious after death. We may take the narrowest view possible of each case and term, but the nature and quantity of them is still too much to write off as a mass of uncertain instances. And, when taken with awareness of the historical and archaeological record, they add up to evidence of belief in an afterlife. (Thus Gillman’s insistence that biblical Israel knew no afterlife is based on such a taking of the narrowest view in every case that he considers and then holding that view to be determinative. Where he treats three cases of explicit reference to afterlife, he says: But that is only three cases — and the text in Daniel is late. And he does not deal with all of the applicable terms. And he does not deal with the archaeological evidence.)
Both in the ground and on the parchment, we have reason to recognize that there were beliefs in life after death in biblical Israel. The question, then, is how to reconcile the biblical whispers with the rest of the evidence: how do we square our knowledge that Israelites believed in an afterlife with the relative rarity of textual references?
This requires an examination of authorship. One of the by-products of recent research by one of the authors of this article is some new data that may contribute to the solution of the present question. This research indicates that there is a continuous work of literature that is embedded among the narrative books of the Hebrew Bible. Its beginning is in Genesis, and it ends in 1 Kings 2. In the Torah it includes all of the text that has been known for over a century as J. J flows beyond the Torah, taking up portions of the books of Joshua, Judges, 1 and 2 Samuel, and the first two chapters of 1 Kings. It is thus the first lengthy work of prose known on earth. It tells a continuous story with hardly a single gap between the Torah and Kings, and it was composed by a single author, probably in the ninth century BCE. Now, the relevance of this to the matter of afterlife is that part of the evidence for identifying this work as a unity is the fact that certain terms and themes occur disproportionately in this work — or only in this work and nowhere else in biblical prose. Among the bank of such characteristic terms, we find that of the nine references to Sheol in all the prose of the Hebrew Bible, all nine are in this group of texts, and none in the rest of biblical prose (Gen 37:35; 42:38; 44:29; 44:31; Num 16:30, 33; 2 Sam 22:6; 1 Kgs 2:6,9). Of twelve references to teraphim, eight are in this group of texts, and four in all of the rest of biblical prose.
It is not just a matter of the terminology employed. The author of this work uses imagery that none of the other biblical prose authors use. In the episode of the rebellion against Moses and Aaron in Numbers 16, this author’s story of Dathan and Abiram has been combined by an editor with the parallel story of Korah from P, but the two stories have notably different endings. In the P Korah account, Korah’s followers are burned; but in the J account (16:30-34) Dathan and Abiram along with their families and possessions are swallowed up by the ground and go down to Sheol alive. It is also in this author’s narrative that the story of the medium at En-Dor communicating with the deceased Samuel occurs. It is also in this work that the story of Israel’s heresy at Baal Peor occurs (Num 25:1-5), involving the possible case of “sacrifices of the dead” (zibhê metîm/zibhê ’elohêhen), as discussed above. And many times in this work there are reports of a man being buried in his father’s tomb: Gideon (Judg 8:32), Samson (16:31), Asahel (2 Sam 2:32), Ahitophel (17:23), and Saul and Jonathan (21:14). The words “He was buried in his father’s tomb” do not occur anywhere else in the Hebrew Bible. And it is also this work that notes in the account of Moses’ death that “no man knows his burial place to this day” (Deut 34:6). The fact that this author is concerned with the location of the grave is notable because ancestor veneration is crucially linked with the actual grave site. As Halpern has shown, the forced separation of the people from these sites in Hezekiah’s reign was a turning-point in the triumph of Israelite monotheism and centralization of worship. This report that no one knows Moses’ burial place has been taken to “emphasize the very finality of the death of this man” (Gillman: 64), but it is a stretch to imagine an author choosing to raise this fact in order to convey that message. In a world in which ancestor veneration is practiced, we would more readily expect this report to mean just the opposite: the ignorance of the burial place is striking precisely because ancestor veneration must be linked to a burial place. And all of the other reports about burials in the family tomb in this work support the likelihood that this is the author’s concern.
Now, none of the other sources of the Torah has any of this terminology or this imagery. What is the difference between this author and the authors of all of the rest of the Torah? The most prominent distinction that comes to mind is that this author is a layperson while all of the others are priests (Friedman, 1987: 72-74, 79, 83, 85-86, 120-124, 128, 188, 210-211, 214). The authors who are priests do not discuss conceptions of the afterlife except in the context of prohibitions. Restrictions against contact with the dead and involvement in certain mortuary practices can be found in both the Deuteronomistic legal material and the Holiness Code: Deut 18:10-11 and Lev 19:31; 20:6; 20:27 prohibit the consultation of dead ancestors either directly or through necromancers and other intermediaries. Deut 26:14 forbids feeding the dead tithed food, and Deut 14:1 and Lev 19:27-28; 21:5 all object to engaging in the self-laceration rituals employed in Canaanite death cult practices. Lewis observed that “priestly material seems almost preoccupied with the defiling nature of the corpse, the bones, and the grave. This preoccupation stands out in contrast to the surrounding cultures of the ancient Near East and may indeed be a reflection of an attempt to combat a cult of the dead” (1989, 175). But, again, this does not necessarily deny that such mechanisms of communicating with the dead are effective. Brichto, too, notes that “the prohibition of recourse to the dead for oracles is in no way a denial of their existence in an afterlife, of their accessibility to the living and of their interest in them” (1973: 8). This is why it would have been in the priests’ interest to suppress the proliferation and undermine the legitimacy of cults of the dead. These cults were not limited to the employment of necromancers. Indeed, specialists were not needed at all to propitiate the good will of the deceased and accrue blessings. The best way for an ancient Israelite to ensure health, prosperity, and fertility was to propitiate the family’s dead ancestors. This did not require a priest, it brought no income to the priesthood, and it could even compete with priests’ income and authority.
The authors of the Priestly portions of the Torah (P) promoted precisely the opposite idea: the only legitimate avenue to the deity is the priests. In the Priestly work there are no angels, no dreams, no talking animals. There are not even prophets. The very word “prophet” occurs only once, and there it refers figuratively to the High Priest, Aaron (Exod 7:1). There are no accounts of sacrifices prior to the inauguration of Aaron as High Priest. And no formal worship is permitted outside of the Tabernacle — which means the Temple, either really or symbolically. There is no description of the creation of any realm of the dead in the Priestly creation account in Genesis 1. (There is none in the J account in Genesis 2 either, but that account does not pretend to be a picture of all of creation in the way Genesis 1 is; thus it also does not include the creation of the heavenly bodies or the seas.) For P there is one God, one Temple, one altar, one sanctioned priesthood.
When the Priestly narrative deals with a family tomb — specifically, in the case of the cave of Machpelah — the focus is explicitly on the purchase of the cave and the land surrounding it, both in the original story (Genesis 23), where the transaction is described in detail and the purchase price is specified, and in every mention of the cave thereafter (Gen 49:29-33; 50:12-13). Rather than relating to ancestor veneration, this focus serves the function of establishing the legitimacy of Israel’s ownership of Hebron, the locale of the cave, which was a city assigned to the Aaronid priests, the group who produced the Priestly narrative (Josh 21:8-11).
Though most of the field of biblical scholarship continues to date the Priestly texts to the post-exilic period, the weight of the current evidence, particularly the linguistic evidence, points to the time of the First Temple (Friedman, 1987: 161-216; 1992: 605-622). The Hezekian dating of these texts naturally connects them with the centralization of the Israelite religion that is ascribed to that king’s reign. It also corresponds to Halpern’s historical description of the religious and political change that followed the destruction of the northern kingdom of Israel and Sennacherib’s campaign against the southern kingdom of Judah. The devastation of the countryside made possible the idea of centralizing the priestly authority in Jerusalem. This was precisely the era of the end of the local ancestral veneration sites. But ancestor veneration must be on the site (so Brichto, Halpern, and Bloch-Smith).
A combination of Sennacherib’s campaign and Hezekiah’s political ingenuity put an end to this traditional community in rural Judah. According to Halpern, in order for Hezekiah to justify sacrificing the outlying communities’ lands to Sennacherib’s armies he had to desacralize the land itself by discrediting traditional ancestral worship; this in turn allowed him to accomplish the centralization of worship in Jerusalem (Halpern 1991: 26-7, 73-76). “For Hezekiah’s purposes, it had been essential to amputate the ancestors, those responsible for the bestowal of rural property to their descendants: they, and they alone, consecrated the possession of land” (74). Without their traditional ancestral lands, the people’s ties and sense of community were of necessity transferred to the monarchy, and competition between ancestor veneration and centralized worship at the Temple in Jerusalem was eliminated. This understanding is corroborated by the advent of a new type of burial, in which individuals, married couples, and occasionally nuclear families were buried in a communal necropolis rather than in family crypts (73).
The Priestly (P) narrative and laws thus reflect this stage in the history of the religion of Israel, when a centralized priesthood displaced local worship that had included ancestor intercession. The sources of the Torah known as E and D reflect the same concerns as P. Their authors appear to come from a different priestly house, identified in some recent scholarship as Shilonite or Mushite (Cross, 1973), but their interests on this point are the same. They were probably against local ancestor veneration from the beginning — as reflected in E, which is from the time of the divided monarchy (pre-722 BCE), and in the oldest portions of the Deuteronomic law code (which may be even older than E) — because it was performed at home, with no need of an altar or a priest as intermediary. They were certainly against it from the time of King Josiah, whose religious reform was even more radically centralizing than Hezekiah’s had been. The Shilonite/Mushite priesthood came to power with Josiah, and D is traced to that period. As Halpern has characterized Josiah’s reform: “It was a systematic effort to erase from the nation’s history the memory not just of a royal predecessor but of a whole culture” (1996: 329). By the time of Josiah, this centralizing tendency had heightened as the state religious practice became the moral norm, and the assault on kinship grew more radical, particularly because Manasseh’s intervening reign had allowed the re-establishment of many of the traditional high places and centers for non-Jerusalem religious activity (Halpern 1991: 74). By the time of Josiah, no such activity was tolerated outside the Temple; as Halpern states, “the state, now, acted as a surrogate for the old tribal institutions, while professing all the while the ideology of those institutions” (76).
Thus E and D also avoid the issue of life after death, except to prohibit contact with the dead in several passages of Deuteronomy. Like P, they are by priests; and, like P, they are silent on life after death.
When we move on from the Torah to the histories, the situation is the same. The full Deuteronomistic history, which extends from Deuteronomy through 2 Kings, comes from the same hands as Deuteronomy itself, and those are the hands of priests, the Mushite or Shilonite priests who rose with Josiah. And the Deuteronomistic historian is silent on afterlife. Where the history includes anything to do with this subject, it is found in a passage that is manifestly from one of the historian’s sources, not from the historian himself. The story of the medium at En-Dor, for example, belongs to the source work that we described above. It does not contain any of the characteristic language of the Deuteronomistic historian himself.
Similarly, there are the three resuscitation stories in the books of Kings. Elijah and Elisha each participate in bringing a dead boy back to life (1 Kings 17, 2 Kings 4), and a chance contact with Elisha’s bones revives a dead man on his way to burial (2 Kings 13). One may say that these stories do not necessarily imply the existence of any realm of conscious afterlife in any case; but, even if we take the view that they do imply some such realm of post-mortem existence, they, too, contain no characteristic Deuteronomistic language. They rather belong to one of the Deuteronomist’s sources, a chronicle of the northern kingdom of Israel. The same applies to the story of Elijah’s ascent in a whirlwind in 2 Kings 2. The story is often taken to mean that Elijah does not die. Alternatively, it may be precisely the account of his death. Either way, though, it belongs to the source, not to the historian’s own composition.
One may ask why the Deuteronomistic historian retained these stories if they presented things in which he did not believe. The long answer would involve a proper analysis of how each of the biblical editors and historians worked and what their respective attitudes were toward their sources. The brief answer for our present purposes is that the Deuteronomistic historian included lengthy source texts without apparently feeling the need to make constant interruptions and cuts, so long as he could compose the introduction, framework, and conclusion to set the history in his particular perspective (Friedman, 1987: 130-145; 1995: 70-80).
The other major historical narrative, the Chronicler’s Work, comprising the books of Chronicles, Ezra, and Nehemiah, is commonly associated with the same priestly community that produced the Priestly Work (P), i. e. the Aaronid priesthood. It should come as no surprise that the Chronicler’s Work, like other priestly products, does not deal with afterlife.
This same distinction between lay and priestly writers prevails in the Major Prophets. Jeremiah and Ezekiel are priests. Jeremiah is associated with the priestly house that produced the Deuteronomistic texts; Ezekiel is associated with the Aaronid priestly house that produced the P texts. Neither of them is known for afterlife terminology. The book of Jeremiah in particular does not include occurrences of terminology associated with post-mortem existence. Ezekiel has the famous dry bones that become animated, but it is not at all clear that this points to any Jewish belief in afterlife in that period. In the first place, it is just a metaphor within a vision, used to express the possibility of regeneration of Israel after a national catastrophe. Second, it refers to resurrection, not to the existence of a post-mortem realm. Third, and above all, it may just as likely be evidence against belief in afterlife as for it. One could argue that it shows that Jews believed in such things, or one could just as easily argue that it is presented as something extraordinary and unanticipated, something that was not commonly believed.
Meanwhile, the prophecy of (First) Isaiah, who is not a priest, is filled with allusions to afterlife experience, employing such terms as se’ôl, repa’îm, and ’ittîm. This is surprising in that Isaiah is presented as a supporter of Hezekiah, the king in whose reign the centralization of religion triumphed, the Aaronid priesthood achieved ascendancy over other priestly houses, and the sites of ancestor veneration were wiped out. It may be, nonetheless, that the prophet could consistently both support all of the royal ideology and still speak of afterlife as a phenomenon. As a layperson he would not necessarily have the same stake as a priest in actively suppressing any talk of afterlife in his writings.
Why would priests in general be so averse to discussing anything regarding after-death experience, while someone like the author of J incorporates it as though taking for granted that it was part of his or her readers’ world view? First, as we discussed above, local ceremonies for dead ancestors did not require a priest, brought no income to the priesthood, and could even compete with priests’ income and authority. The priests’ livelihood was dependent on sacrifices to YHWH, and the priestly laws were designed in such a way as to ensure that all aspects of interaction with the divine were conducted only through the priests. If a belief in an afterlife was encouraged, and necromancy was given legitimacy as a means for knowing the divine will, then the priests would be ceding a portion of the control of the religion.
In this vein it is important to note that in the story of the woman of En-Dor, Saul is seeking a necromancer precisely because “Saul saw the Philistines’ camp and was afraid, and his heart trembled very much. And Saul asked of YHWH, and YHWH did not answer him, neither through dreams nor through Urim nor through prophets” (1 Sam 28:5-6). Thus, although he has banished all necromancers from the land in accord with the law, in order to learn his fate Saul must turn to one of them for answers; and she is able to conjure up Samuel. Thus, through the necromancer, Saul succeeds in circumventing YHWH’s silence. The dead Samuel has access to knowledge regarding the will of YHWH and is able to predict Saul’s future for him. The inherent lesson in this is that one need not go through priests and legitimate means of inquiry regarding the divine will so long as one has recourse to inquiring of the spirits of the deceased. Since the priests were not necromancers, this was not an aspect of religion that they could control. It is thus natural that they not grant it legitimacy by even hinting at the existence of an underworld or an afterlife in their writings despite the fact that such beliefs may have been popular and widespread.
A second explanation of the priesthood’s opposition to afterlife beliefs is related to the hypothesis that the priestly group, the Levites, originally came from outside the land. On this hypothesis, the Levites were the Israelites who had experienced the enslavement and exodus and who then entered Israel and merged with the tribes who already resided there. They therefore did not have ancestral territory, which is essential to local veneration practices. (That is why the Levites receive ten percent of the other tribes’ produce; it is in lieu of land.) Their ancestors were presumably buried back in Egypt. Even if they had originated in Canaan prior to their sojourn in Egypt, they were long cut off from the burial sites of their ancestors.
A third possibility is that the laws forbidding priests to have contact with the dead were in place before necromancy and related practices became popular, or before the Levitical priests came into contact with this type of religion, and so priests could not have anything to do with those practices.
Another possibility is that the Levites, coming from the Egyptian experience, were reacting against Egyptian religion’s obsession with the dead. To find the origin of an antipathy to such beliefs and practices, we might be best advised: Go back to Egypt — and to the founder. If a historical Moses really was offended by all of that concentration on death, he might well have bequeathed to his followers and successors a strict rejection of the entire scheme.
These possibilities are all speculative, and we raise them only to establish that there are in fact historical scenarios in which we can conceive of a priesthood which is at odds with the masses regarding afterlife. And we know of the historical scenario in which the religion of Israel was centralized — first in the reign of Hezekiah and then more thoroughly in the reign of Josiah — which would have made it possible for the priests’ view to predominate over the popular view.
We therefore advise caution against taking what appears to be a thin distribution of references to afterlife as evidence that there was little belief in it. Similarly, we caution against scholarly models of linear progressions, seeing Israel as moving from periods of belief to complete rejection (Job 14:12-14; Eccl 9:5-10) to a final full-blown belief (Dan 12:2 — “those who sleep in the dust will wake. . .”) (Spronk 1986; Lang 1988; Pitard 1993). The problem with both of these positions is that they do not take sufficient account of the specific background and situation of each of the biblical authors. It makes a difference whether the author was a layperson or priest. And the historical events that led to religious centralization made a difference in who had the opportunity to tell the story — and how each would tell it.
It is difficult to state with any degree of certainty that the texts cited as evidence of such radical changes in perspective in a linear progression, such as Job, Ecclesiastes, and some of the Psalms, were representative of the views of a community. To claim that the skepticism of Job and Ecclesiastes with regard to consciousness in Sheol signified a shift in cultural conceptions of the afterlife simply makes too much out of the points of view of individual writers. Even assuming, as many scholars seem to, that these texts were written during roughly the same period, the fact that they both question the existence of life after death cannot be taken as representative of growing doubt on the part of the entire society. And the caution that we are advising here with regard to the wisdom literature applies at least as much to the Psalms. The notorious difficulty in dating the Psalms should make one slow to construct any linear progression of beliefs about the afterlife through them.
Even more caution is called for when bringing the book of Daniel into the progression. There are references to resurrection or resuscitation in various places in the Hebrew Bible: the three Elijah-Elisha cases, Isaiah 26, Hos 6:2, and the dry bones metaphor in Ezekiel 37. Yet, in order to make their linear progression of ideas viable, scholars discount or downplay each one until Daniel. Then, “those who sleep in the dust will wake. . .” is cited as representative of a change in Israelite conceptions of death and after-death experience which, according to this line of thinking, began to take place during the second and first centuries BCE as a reaction to the idea that there was no conscious existence in Sheol, and from there led directly into the formation of Christianity. But the history of thought rarely moves in a precise linear progression, and those references cannot be completely excluded. At minimum they suggest that the concept was a familiar one in Israel for a long time. The fact that it is so explicitly portrayed in Daniel cannot be taken as indicative of a sea change in Jewish thought of the second century BCE. As R. Martin-Achard put it, “Texts relating to resurrection in the Old Testament are rare and dissimilar; they come from different horizons and we cannot simply examine them in chronological order to retrace the history of this theme in the mind of Israel” (1992, 683).
The truth of this statement has become apparent, and not only with regard to the concept of resurrection; none of the ideas connected with afterlife beliefs can be traced in a linear historical progression. We have too few literary voices remaining from each time period to hope that each could represent the thoughts and beliefs of the aggregate accurately. We have seen that there is not simply one view of the afterlife that can be generalized for all of ancient Israel over the thousand year period of the Hebrew Bible’s composition. On the contrary, conflicting views can prevail simultaneously. Rather than attempting to extract a single, unified notion of the afterlife in ancient Israel which progresses linearly through time, we must instead investigate each reference to mortuary rites, the netherworld, veneration of deceased ancestors, necromancy, and resurrection within its own literary-historical framework, with the understanding that each author, within his or her own political and spatio-temporal context, might have a distinct idea of what happens to humans after they die, what they become, and what the proper relationship should be between the living and the dead.
Abusch, Tzvi, 1995. “Etemmu,” Dictionary of Deities and Demons in the Bible, Karel vander Toorn, Bob Becking, and Pieter W. van der Horst, eds. NY: Brill, 588-594.
Albright, William Foxwell. 1918. “The Etymology of Se’ol,” AJSL 34:209-10
______. 1957. “The High Place in Ancient Palestine,” Volume du Congres Internationale pour l’Etude de l’Ancien Testament, VTSup 4: 242-258.
Bailey, Lloyd R. 1979. Biblical Perspectives on Death. Philadelphia: Fortress.
Barkay, Gabriel, 1986. Ketef Hinnom: A Treasure Facing Jerusalem’s Walls. Jerusalem: Israel Museum.
Bayliss, Miranda. 1973. “The Cult of Dead Kin in Assyria and Babylonia,” Iraq 35: 115-25.
Blenkinsopp, Joseph. 1995. “Deuteronomy and the Politics of Post-Mortem Existence,” VT 45: 1-16.
Bloch-Smith, Elizabeth. 1992. Judahite Burial Practices and Beliefs about the Dead. JSOT Supplement Series 123, Sheffield.
Brichto, Herbert C. 1973. “Kin, Cult, Land & Afterlife — A Biblical Complex,” HUCA 44: 1-54.
Cooley, R. E. 1983. “Gathered to His People: A Study of a Dothan Family Tomb,” in The Living and Active Word of God, M. Inch and R. Youngblood, eds. Winona Lake, IN: Eisenbrauns, 47-58.
Cooper, Alan. 1987. “MLK ‘LM: ‘Eternal King’ or ‘King of Eternity’?” in Love & Death in the Ancient Near East: Essays in Honor of Marvin H. Pope. J. H. Marks and R. M. Good, eds., Connecticut: Four Quarters, 1-8.
______. 1983. “Ps 24:7-10: Mythology and Exegesis,” JBL 102: 37-60.
Cross, Frank Moore. 1973. Canaanite Myth and Hebrew Epic. Cambridge, Mass: Harvard.
Driver, Samuel R. 1916. A Critical and Exegetical Commentary on Deuteronomy. NY: Scribner’s.
Friedman, Richard Elliott. 1979-80. “The Mrzh Tablet from Ugarit,” Maarav 2: 187-206.
______. 1987. Who Wrote the Bible? New York: Summit/Simon & Schuster; 2nd edition, San Francisco: HarperCollins, 1997.
______. 1992. “Torah,” The Anchor Bible Dictionary VI: 605-622.
______. 1995. “The Deuteronomistic School,” in Fortunate the Eyes That See, David Noel Freedman Festschrift, Astrid Beck et al., eds., Grand Rapids, MI: Eerdmans, 70-80.
______. 1998. The Hidden Book in the Bible. San Francisco: HarperCollins.
Gillman, Neil. 1997. The Death of Death. Woodstock, NY: Jewish Lights.
Halpern, Baruch. 1991. “Jerusalem and the Lineages in the Seventh Century BCE: Kinship and the Rise of Individual Moral Liability,” in Law and Ideology in Monarchic Israel, Baruch Halpern and Deborah W. Hobson, eds., JSOT Supplement Series 124, Sheffield, 11-107.
______. 1981. The Constitution of the Monarchy in Israel, Harvard Semitic Monographs, Atlanta: Scholars Press.
______. 1979. “A Landlord-Tenant Dispute at Ugarit?” Maarav 2: 121-140.
______. 1996. “Sybil, or the Two Nations?” in The Study of the Ancient Near East in the Twenty-First Century, J. S. Cooper and G. M. Schwartz, eds., Winona Lake, IN: Eisenbrauns, 291-338.
Hoffner, Harry A., Jr. 1968. “Hittite tarpis and Hebrew teraphim,” JNES 27: 61-68.
Kaufmann, Yehezkel. 1960. The Religion of Israel, trans. and ed., Moshe Greenberg, Chicago: University of Chicago. Hebrew edition, 1937.
Kennedy, Charles A. 1992. “Dead, Cult of the,” The Anchor Bible Dictionary 2: 105-108.
Kuemmerlin-McLean, Joanne K. 1992. “Magic,” The Anchor Bible Dictionary 4: 468-471.
Lang, Bernhard. 1988. “Afterlife; Ancient Israel’s Changing Vision of the World Beyond,” Bible Review 4: 12-23.
______. 1986. “Life After Death in the Prophetic Promise,” VTSup 40, Congress Volume Jerusalem: 144-156.
Lewis, Theodore J. 1992a. “Ancestor Worship,” The Anchor Bible Dictionary 1: 240-242.
______. 1992b. “Dead, Abode of the,” The Anchor Bible Dictionary 2: 101-105.
______. 1989. Cults of the Dead in Ancient Israel & Ugarit. Harvard Semitic Monographs 39. Atlanta: Scholars Press.
Martin-Achard, Robert. 1992. “Resurrection,” The Anchor Bible Dictionary 5: 680-84.
Meyers, Eric M. 1971. Jewish Ossuaries: Reburial and Rebirth, Biblica et Orientalia 24, Rome: Biblical Institute Press.
McCarter, P. Kyle, 1973. “The River Ordeal in Israelite Literature,” HTR 66:403-412.
Oppenheim, A. Leo. 1956. “The Interpretation of Dreams in the ANE with a Translation of an Assyrian Dream Book,” TAPhS N.S., 46: 179-373.
Pitard, Wayne T. 1993. “Afterlife and Immortality; Ancient Israel,” The Oxford Companion to the Bible, B. M. Metzger and M. D. Coogan, eds., Oxford, 15-16.
Pope, Marvin. 1981. “The Cult of the Dead at Ugarit,” Ugarit in Retrospect, G.D. Young, ed., Winona Lake, IN: Eisenbrauns, 159-179.
______. 1977. Song of Songs, The Anchor Bible, New York: Doubleday.
Ribar, J. W. 1973. Death Cult Practices in Ancient Palestine. Diss. University of Michigan.
Richards, Kent H. 1992. “Death (OT),” The Anchor Bible Dictionary 2: 108-110.
Schmidt, Brian B. 1994. Israel’s Beneficent Dead: Ancestor Cults in Ancient Israelite Religion and Tradition. Tübingen: J. C. B. Mohr.
Smith, Mark S. 1994. “Rephaim,” The Anchor Bible Dictionary 5: 674-676.
______, and Elizabeth M. Bloch-Smith. 1988. “Death and Afterlife in Ugarit and Israel,” JAOS 108: 277-284.
Spronk, Klaas. 1986. Beatific Afterlife in Ancient Israel, AOAT, 219. Neukirchen-Vluyn: Neukirchener Verlag.
Tromp, N. J. 1986. Primitive Conceptions of Death and the Nether World in the Old Testament, Biblica et Orientalia 21.
Vander Toorn, Karel. 1990. “The Nature of Biblical Teraphim in the Light of Cuneiform Evidence,” CBQ 52: 203-222.
This was developed in an unpublished paper: Richard Elliott Friedman, “The First Great Writer,” which was read at the Biblical Colloquium and in colloquia at Cambridge; Yale; The Hebrew University of Jerusalem; the University of California, Berkeley; and the University of California, San Diego. It now appears in R.E. Friedman, The Hidden Book in the Bible (San Francisco, HarperCollins, 1998). | <urn:uuid:a2d17ff9-ea66-4a3c-9c48-8d6616c9018c> | CC-MAIN-2021-49 | https://richardelliottfriedman.com/2017/06/02/death-and-afterlife/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00411.warc.gz | en | 0.94859 | 13,406 | 3.4375 | 3 | {
"raw_score": 3.016223430633545,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Religion |
Last Updated: June 1, 2020
On your journey across the internet, the router is your most trustworthy companion. What is a router, you ask? And what is a router in plain English?
It is the device that makes your internet connection possible. Just like an experienced mailman who knows all the addresses and always puts the letters in the correct mailboxes, the purpose of a router is to navigate, allocate and deliver data. The speed, of course, is on a rather different scale from that of the mailman.
From Mailman to Router
That brings us to the internet router definition: a device which forwards data packets to the appropriate parts of the computer network. These packets are the lifeblood of any communication, just like the bags of letters our mailman carries.
His job is easy enough to understand, but how does a router work? Simply put, the router is the magic that happens behind the click of the mouse. In a matter of seconds, data flies and the information starts coming your way.
A router can send a thousand days’ worth of letters in only a day. That surely requires some advanced technical setup, right? Well, not necessarily. Most internet service providers will assist you with setting up the router. Nevertheless, it is always handy to know a thing or two about the gadgets you are using.
Now that you know the internet router definition, you might ask: what is the purpose of a router, then? Let me give you a very simple example.
Some time ago, you needed to go to a library to find a specific book on penguins. Then you had to read and highlight the essentials so you could write your school report.
This is a data-gathering process in its own right, only much slower than the technological equivalent. The definitive purpose of a router is to make your life simpler. All the information, all the data is in front of you. Sounds great, but…
What Does a Router Do Exactly?
In technical terms, it connects to a modem, be it a fiber, cable or DSL (Digital Subscriber Line) connection. In most cases, the router needs to connect physically to a modem via a network cable. It may use an internet or WAN (wide area network) port, or a LAN (local area network) port.
In the first case, the IP address is public, while an address assigned on the LAN is a private IP. In case you are wondering "What is my router IP?" – most manufacturers use 192.168.0.1 or 192.168.1.1 as the default LAN IP address, and the short sketch below shows one way to look yours up programmatically.
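This is a minimal sketch, assuming a Linux machine with the ip command available; on Windows you would read the "Default Gateway" line from ipconfig instead, so treat it as an illustration rather than a universal tool.

```python
# Minimal sketch: find the router's (default gateway) IP address on Linux.
import subprocess

def router_ip() -> str:
    # Typical output: "default via 192.168.1.1 dev wlan0 proto dhcp metric 600"
    out = subprocess.run(["ip", "route", "show", "default"],
                         capture_output=True, text=True, check=True).stdout
    fields = out.split()
    if "via" in fields:
        return fields[fields.index("via") + 1]
    raise RuntimeError("No default route found - are you connected to a network?")

print("Your router is most likely at:", router_ip())
```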
Otherwise, you can always check your IP manually via the control panel and settings. But wait, you mentioned a modem, so what is the difference:
Modem vs. Router
The modem and the router are two distinct devices that are often mentioned in the same breath. Typically, your ISP lends you both, so it can be easy to confuse them. The distinction: the modem connects you to the internet, while the router connects your devices to Wi-Fi.
The modem converts digital signals into analog waves and vice versa. This allows a digital computer to communicate over an analog phone line, which was designed to carry speech.
This does not mean that you absolutely need to have both a modem and a router. You can combine both devices and achieve the status of a modern-age computer God… or at least, be as fast as possible. Some ISPs offer a modem and a router in a single device.
While this does allow you to deal with fewer cables, you may want to stick with using dual devices. Separate devices give you more flexibility and in some cases – make you more productive. And everyone wants to have the fastest speed, right?
Speaking of cables and wires, the wireless internet router is probably the best thing since sliced bread! On top of having all the functionality of a router, it also includes features such as a wireless access point. It can function in a wired local area network and in a wireless-only LAN, or in a mixed wired and wireless network.
Let’s have a look at different types of routers, shall we?
A brouter, or bridge router, combines the functions of a network bridge and a router: it routes packets between networks when it knows where to send them, and simply bridges them at the link layer when it does not. (Do not confuse it with BRouter, a bike-routing app that offers alternative route calculations, works fully offline on Android phones, and considers elevation and long-distance cycle options.)
The core router operates in the backbone of the internet. This type of router must be able to support multiple telecommunication interfaces of the highest speed. A core router’s job is to forward packets to computer hosts within a network (but not between networks).
However, an edge router does exactly the opposite – residing at the edge of the network, this router must ensure its connectivity with other networks, a wide area network or the internet. Together with the core router, they make sure that points on different networks can exchange information.
The virtual router uses software to enable a computer or server to work as a full-fledged router. VRRP (Virtual Router Redundancy Protocol), for example, uses virtual routers to increase the reliability and availability of the network.
Wireless Is the Way to Go
The wireless router, also known as a WiFi router, is a huge revolution in modern society. How many times have you heard someone (like your grandma) who has no clue about computers or the internet asking about WiFi? A wireless router can typically reach a range of 150 feet indoors and up to 300 feet outdoors. Of course, bear in mind that obstructions like walls and other objects may further reduce the indoor range by up to 25%.
The WiFi revolution was kick-started by the company Linksys in 1999. Since then, the technology has undergone a fundamental evolution. In the year 2000, WiFi speeds peaked at 11 Mbps. In 2013, the gigabit-per-second barrier was finally broken, allowing us to enjoy ultra-fast networking speeds. All YouTubers and vloggers must be extremely thankful, right?
Today’s generation of routers is almost exclusively wireless. Why deal with cables and only connect a few devices at a time, when you can connect to as many as you wish without even thinking about it?
Do You Need a Network Router with an Antenna?
If you’re serious about good design, you may wonder if your router really needs an antenna.
While an antenna can improve the signal in a certain direction, generally the performance of the router is tied more closely to its price bracket.
There are high-end routers with internal antennas that out-perform many external antenna routers. If you do go with an external antenna, though, there are some tricks to get the most out of it. Find the sweet spot for the router, so that the Wi-Fi signal gets sent more or less uniformly in all directions. Good options for an increased signal radius include creating a virtual access point or using a Wi-Fi repeater.
Alright, What About Router Price?
Nobody wants to spend on a router more than they have to. The good news is, depending on what you do with it – you may easily get away with a simpler, more affordable model.
It does depend on a couple of factors ranging from how big your home is to how much security you want and whether you’re a gamer or not. If you are a passionate gamer, you will, without doubt, be looking for the very best router.
A good choice would be the Asus ROG Rapture GT-AC5300 – it provides 8 wired LAN ports and comes with strong built-in security features, protecting you from malware and viruses. It is also IPv6 compatible.
Another very reliable device is the Netgear Nighthawk X10 AD7200 Smart WiFi router. It’s extremely fast and gives you the chance to do some quality 4K Streaming, VR Gaming and utilize the Plex Media Server.
There are also some cheaper options on the market. You can easily have a powerful and stable router for under 50 dollars. The TP-Link AC1200 is a very good solution for its price, providing enough speed to watch videos in HD, as well as a solid online gaming platform. The TP-Link Archer C7 is also a quick and easy option, as it is easy to set up and performs well.
What Is Router Security and Why It Is Important?
As with anything that goes online, security is a key aspect of your router. There are some steps you can take to increase your security.
Something not a lot of people are aware of is that you can install a VPN directly on your router. The wins in this scenario are considerable. Not only can you surf the internet with always-on security and privacy, but you can also benefit from coverage for devices that don’t support VPNs.
Another thing you can do is change the password of your router. Avoid using the default password, as it is easy to guess. Dictionary words are also not a safe bet, as these passwords are still fairly easy to break.
Long passwords are more secure than shorter ones – it’s easily worth it to go as high as 16 characters in length. Typically, a mixture of lowercase letters, capital letters, digits, and symbols is the optimal choice.
Wonder what a good password may look like? It’s not necessarily hard to remember. Something like tobeornottobe–>THATisthe? for the bookworms, or how about 123New123York34Yankees34 for the sports fans.
This is the easiest step you can take to increase security, so do change the password, it will not take more than a minute or two. And you are always better safe than sorry.
Finally, there’s always the option to use a password manager that will remember your complex passwords for you.
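If you would rather not invent a password yourself, a few lines of Python can generate one. This is a generic sketch using the standard secrets module; the 16-character length simply follows the advice above, and the symbol set is an arbitrary choice.

```python
# Generate a random 16-character password from letters, digits, and symbols.
import secrets
import string

def make_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. q7Rk@2vPz-9Xw#Bf (yours will differ)
```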
There’s also the matter of encryption
The Wi-Fi encryption should be WPA2 with AES, rather than WEP or WPA. This way you encrypt the data traveling between a Wi-Fi device and a router or access point (known as over-the-air encryption). WPA2 is widely accepted as the most secure option commonly available on consumer hardware – and if your router supports the newer WPA3, that is better still.
Never use WEP – the method has not aged gracefully and was proven to be crackable years ago. It is still used in out-of-date systems – so feel free to check and take immediate action if you are using WEP.
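Not sure what your own network (or the ones around you) is using? On a Linux machine with NetworkManager installed, you can list nearby networks together with their security modes. The sketch below simply wraps the nmcli command, so it is an illustration rather than a universal tool and will not work on systems without that utility.

```python
# List nearby Wi-Fi networks and their security modes (WPA2, WPA1, WEP, ...).
# Assumes Linux with NetworkManager's nmcli tool installed.
import subprocess

scan = subprocess.run(["nmcli", "-f", "SSID,SECURITY", "device", "wifi", "list"],
                      capture_output=True, text=True, check=True).stdout
print(scan)

# Flag anything still using WEP; "--" in the SECURITY column means an open network.
for line in scan.splitlines()[1:]:        # skip the header row
    if "WEP" in line or line.strip().endswith("--"):
        print("Weak or open network:", line.strip())
```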
The next step is to turn off UPnP (Universal Plug and Play). Most consumers never change this setting, which helps explain why recent cyber attacks were executed through standard router configurations that ship with Universal Plug and Play enabled.
Now don’t get me wrong, UPnP is a very convenient way for your gadgets to automatically find other devices on the network. This process greatly decreases the complexity of setting up new devices. Still, that comes with a certain weakness.
There are multiple vulnerabilities within UPnP, which hackers exploit for large-scale attacks. Without being properly protected, or even aware of the threat, you are possibly exposing your private data (like credit card information) for all to see.
Additional Security Measures
Another thing you might want to check out is the SSID (Service Set IDentifier) name, which in simple words is the name of your wireless network. You should change the default SSID name for a couple of reasons. In most cases, hackers view a default SSID name as a “hack me” sign.
Furthermore, when choosing your network name, avoid using personal data or any information that may be obtained easily. Don’t make it easy for the baddies to target you.
It is also better to turn off WPS – Wi-Fi Protected Setup. The protocol, as implemented in consumer routers, is notoriously easy to crack. In fact, it would be even better if your router did not support WPS at all, as that removes the vulnerability altogether. Still, if it does, make sure to turn it off.
Security measures you can take don’t end here, unfortunately. It also pays to turn off remote administration, if it is enabled. And finally, test your router and make sure to periodically check for new software updates.
You should always have the latest firmware updates installed to avoid known vulnerabilities. At some point you may go a year or two without any updates – that is a sign that it is time for a new router. As a final check, the sketch below shows a simple way to see which management services your router exposes on your home network.
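This is only an illustration: the gateway address is an assumption you may need to change, an open port is not automatically a problem (it is simply something to review in the admin interface), and the probe says nothing about what is exposed to the internet.

```python
# Probe a few common management ports on the router from inside the LAN.
import socket

ROUTER = "192.168.1.1"   # assumption: change this to your own gateway address
PORTS = {80: "HTTP admin", 443: "HTTPS admin", 22: "SSH", 23: "Telnet", 7547: "TR-069"}

for port, label in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        state = "open" if s.connect_ex((ROUTER, port)) == 0 else "closed/filtered"
        print(f"{label:<11} (port {port:>4}): {state}")
```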
I’ve read through a lot of information now, but I’m still not quite sure – What is a router?
The router is the pathway between your home devices (PC, Mac, iPhone) and another network. It connects your hardware with the world.
Think of it as a hardware highway system. The modem connects the road in front of your house to the highway, while the router is like the traffic controller at every interchange, making sure your car gets from point A to point B as quickly as possible.
The path is chosen from a predetermined set of routes; that does not mean the router is blind to the alternatives – it knows all the possible routes and is just really quick at finding the fastest one!
Believe it or not, the first router was brought into an operational state back at the beginning of 1974; however, router history dates back to the late ’50s, when the development of the internet began. The first home routers were limited to 100 Mbps upload and download, which was the maximum at the time.
The technology has gone a long way since. Now you can do a wealth of awesome stuff you couldn’t before – playing your favorite game online, sharing great videos and memories – whatever you want, practically.
It’s a necessity to be informed about your home network environment, so you can optimize your personal experience and stay protected from malicious attempts to break into your private information.
In conclusion, the router is all about making life easier and simpler. Knowing what a router is gives you an edge in making the most of that. With all this information, you can pick the best router for your goals.
Look at you, we began with “what is a router” and now we’re already talking about deep stuff like encryption standards.
Hopefully, this article was useful to you.
A device that forwards data packets to the appropriate parts of the computer network. These packets, like real-life ones, are how the payload gets transported. Routers direct traffic and ensure the data gets transferred from one network to another.
The router is an essential part of how the internet works. It allows you to send emails around the world in a matter of seconds. The router finds the designated IP address and delivers the data. The device is used to transmit data and genuinely provide the best internet experience.
A device that performs the functions of a router but also acts as a wireless access point. It’s well known in modern days, as a lot of devices (mobiles, tablets, laptops) utilize a wireless signal instead of being directly wired to a router.
Routers on the market are commonly grouped into broadband and wireless routers, along with more specialized types (edge, core, virtual, brouter).
A device or program that enables a computer to transmit data over telephone or cable lines. A modem converts digital information to analog and vice-versa.
The modem connects you to the internet, while the router connects your devices to Wi-Fi. The two are quite similar and often mistaken. You can get both the modem and the router in a single device.
You can get a decent router for less than 50 dollars, but the best ones will probably cost you around 300-400 dollars, especially if you are planning on streaming etc. If you only need a router to be able to browse the internet, a more affordable one will do just fine.
Delivering data from point A to point B (your PC to mobile in Japan) in the fastest way possible. Simply making your life easier.
Most manufacturers use 192.168.0.1 or 192.168.1.1 as a default LAN IP address. Otherwise, the easiest way to find out what your router IP is would be to Google search it.
There are input and output ports, a processor, and switching fabric. Feel free to watch some more detailed materials if you want to go more in-depth.
A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP’s network. Routers are located at gateways; they use packet headers and forwarding tables to determine the best path along which to forward the packets, and protocols such as ICMP to communicate with each other and report errors. Very little filtering of data gets done through routers. The toy example below illustrates the forwarding-table idea.
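The sketch picks the longest matching prefix from a made-up table; real routers do this lookup in specialized hardware rather than in a script, so treat it purely as an illustration.

```python
# Toy forwarding table: the longest matching prefix decides the next hop.
import ipaddress

TABLE = {                      # prefix -> next hop (illustrative values only)
    "0.0.0.0/0":   "203.0.113.1",   # default route
    "10.0.0.0/8":  "10.0.0.254",
    "10.1.2.0/24": "10.1.2.254",
}

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(p) for p in TABLE
               if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda net: net.prefixlen)   # most specific route wins
    return TABLE[str(best)]

print(next_hop("10.1.2.7"))   # 10.1.2.254 (most specific match)
print(next_hop("8.8.8.8"))    # 203.0.113.1 (falls back to the default route)
```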
Not necessarily. Again, it depends on the type of router you’re planning on getting. An expensive router will certainly provide you with the best signal. A high-end router with an internal antenna can way out-perform a cheaper one with an external antenna.
The connection is typically up to 150 feet indoors and 300 feet outdoors. However, walls and other objects can significantly reduce the range.
There are several steps you can take to ensure your network security. Essentially, make sure you change your default router username and password (longer is better – up to a 16-digit combination of capital, small letters, digits, and symbols). Turning off the WPS and UPnP are also good ways to protect yourself. Finally, opt for a WPA2 with AES Wi-Fi encryption. | <urn:uuid:abfd8c5e-c1ca-468d-b674-60ec3a574849> | CC-MAIN-2020-29 | https://techjury.net/blog/what-is-a-router/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00163.warc.gz | en | 0.937715 | 3,639 | 2.9375 | 3 | {
"raw_score": 1.9437483549118042,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Hardware |
Thanks to the internet, there are tons of resources for learning how to code on your own. It can sometimes be overwhelming to figure out where to start and which source to put your full trust into. Any tutorial or resource is usually a great start, but I’m going to share my favorite resources so far for teaching children how to code. I will also include a resource for older teens, young adults, and even adults to use (Code Academy).
Code Studio is a fun and interactive way for children of all ages to get into programming. I would say it’s the more "modern" approach for children today since it uses things like Minecraft and Flappy Bird to grab their interest while they learn. You can create an account to keep up with completed activities, or just get on the site and try some activities without signing up.
Scratch programming allows you to learn the basics of programming and logical thinking without jumping straight into the confusing world of code. It uses drag-and-drop blocks to create a sequence of events. These events can be used to create animations, drawings, games, and more. It is usually used by children ages 8 to 16, but can be complex enough for all ages.
Code Academy is a great way to learn code through interactive lessons. The above 2 resources (Scratch Programming and Code Studio) are aimed at a younger audience because they use games to capture and retain interest while teaching the fundamentals of code. Code Academy is the next step to take, or at least that is how I got my start. You can choose to start learning how to develop websites or jump straight into server-side programming languages like Python. Code Academy is interactive enough to feel like a game, so you are learning code while having that feeling of accomplishment after each project.
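To give you a feel for what that first step into Python looks like, here is the kind of tiny program a beginner might write early on. It is a generic illustration, not an exercise taken from any of the sites above.

```python
# A first program: greet the user, then count down to lift-off.
name = input("What is your name? ")
print("Hello, " + name + "! Let's count down together.")

for number in range(5, 0, -1):
    print(number)
print("Lift off!")
```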
Let’s Wrap Up!
Learning to code is a powerful tool to have under your belt in this day and age. Whether you are in elementary school, high school, college, working, or retired, it’s never too late to learn how to code. If you are wanting to teach your child how to code, I would suggest starting off with either Scratch programming or Code Studio. If they seem to breeze through those sites, or you are old enough not to need games or drag-and-drop boxes, I would suggest visiting Code Academy to learn how to program or build websites.
If you are interested in learning more about how this topic can help your business, please contact our agency on our contact page or call us at 1-888-964-4991. We publish a new article once or twice per month so make sure to follow us on social media and allow for push notifications if you want to stay in the loop with our agency and digital marketing.
About the Author
Self taught web developer who has proven to excel in a variety of areas. He started his career with Future Design Group as a web developer and has strong input in the operations of social media marketing. Esteban is constantly studying and researching digital marketing trends to keep up to date, and he always has a theory/educated prediction on the future of digital marketing. | <urn:uuid:beea1b6b-41cf-4752-a0ff-9f3710478e32> | CC-MAIN-2017-22 | https://www.futuredesigngroup.com/blog/learning-to-code | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612327.8/warc/CC-MAIN-20170529130450-20170529150450-00344.warc.gz | en | 0.948334 | 643 | 2.953125 | 3 | {
"raw_score": 1.674587607383728,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Software Dev. |
While many people are aware of the link between diet and exercise performance, many more are not. Your nutrition can have a substantial impact on how well you perform. Athletes who compete in professional sports have their diets prepared by nutritionists throughout the season.
A nutritionally packed diet is a must-have component of your workout routine if you’re serious about becoming your strongest and fittest self. With the proper dietary guidance, you can ensure that you provide your body with the long-lasting energy that it requires to perform at its best.
Proper And Effective Fitness Nutrition
Consuming a well-balanced diet can assist you in obtaining the nutrients necessary to fuel your everyday activities, which may include daily exercise. Below are the practical nutrition tips you can try to take your fitness to another level.
Do Not Skip Meals
One of the most common mistakes people make when chasing their fitness goals is eating too little or skipping meals entirely. This is an ineffective strategy for long-term weight management and weight loss. Your body requires fuel to function correctly.
One way to ensure you have enough energy and avoid blood sugar spikes or decreases is to spread your meals throughout the day and include nutrient-dense foods such as whole grains, healthy fats, and lean protein.
Before You Begin Your Workout, Fill Up Your Tank
Are you someone who leaps out of bed at the crack of dawn and runs on an empty stomach? That can actually be a sensible habit, as running before breakfast can significantly aid fat loss. However, before engaging in intense exercise such as lifting weights, you should replenish your energy levels.
Without this, you will be unable to give your workout your all and improve your performance. Low blood sugar levels can cause weariness and dizziness. Before your activity, a high-carb snack (such as a banana or an apple) is ideal for refilling your glycogen stores (the fuel for your muscles that is stored in your body).
Sufficient Supply Of Electrolytes
If you prefer shorter, more relaxed workouts, then drinking enough water throughout your exercise routine should suffice. However, if you exercise for more than an hour at a high level, you should also take a drink with sugar and salt concentrations comparable to those found in the body.
Carry a drink that is quickly absorbed and can offer your body immediate fuel. Electrolyte drinks should contain between 60 and 80 grams of carbs and 400 and 1000 milligrams of sodium per liter. Additionally, they may contain calcium, magnesium, and potassium. In this way, they can help you rapidly replenish the electrolytes lost through sweating.
Protein-Rich Snacks And Meals
Protein is necessary to keep the body growing, maintaining, and repairing itself. According to the University of Rochester Medical Center, red blood cells expire after approximately 120 days.
Protein is also essential for muscle growth and repair, which allows you to reap the full benefits of your workout. Despite the fact that it can give energy when carbohydrates are scarce, it is not a significant fuel source when participating in physical exercise.
According to the Harvard Health Blog, adults should take in approximately 0.8 grams of protein per kilogram of body weight daily to maintain a healthy weight. That works out to approximately 0.36 grams of protein per pound of body weight. Some people therefore purchase additional sources of protein, such as supplements, to obtain the protein they need. A prescription discount card and other coupons can help you save a significant amount of money if you have been prescribed dietary supplements.
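As a rough sketch of that arithmetic, here is how the quoted 0.8 g/kg figure translates into a daily target (the 70 kg body weight is just an example, and the kilogram-to-pound conversion is the standard factor):

```python
KG_PER_LB = 0.45359237  # standard kilograms-per-pound conversion factor

def daily_protein_grams(weight_kg: float, grams_per_kg: float = 0.8) -> float:
    """Estimate daily protein needs from body weight using the 0.8 g/kg guideline quoted above."""
    return weight_kg * grams_per_kg

print(f"{daily_protein_grams(70.0):.0f} g of protein per day for a 70 kg adult")  # ~56 g

# Cross-check the per-pound figure: 0.8 g/kg works out to roughly 0.36 g/lb.
print(f"{0.8 * KG_PER_LB:.2f} g of protein per lb of body weight")  # 0.36
```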
The majority of diet plans focus on the number of calories consumed each day. Numerous free apps and websites compute the number of calories you should consume based on your activity level, the amount you should consume to maintain your weight, and the amount you should consume to lose weight.
Examples include BMI calculators and calorie calculators, which can be used to determine your basic nutritional and calorie requirements. Comparing your food diary entries to the calculator results can be eye-opening.
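For illustration, the BMI part of such a calculator is simply the standard formula of weight in kilograms divided by height in meters squared; the sample numbers below are arbitrary:

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

print(f"BMI: {body_mass_index(70.0, 1.75):.1f}")  # 70 / 1.75**2 is roughly 22.9
```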
Another excellent resource for tracking your caloric intake is a free app that can be downloaded to your smartphone or tablet. It gives you access to the world’s most extensive nutrition and calorie database, containing millions of different foods. It’s a simple and convenient way to track the calories in the food you eat on the go!
With the help of practical and healthy nutrition tips, you can make significant improvements in your eating habits and push your fitness to new levels. To be sure, monitoring your food intake and what you consume is essential if you want to live a better lifestyle. With the valuable tips provided above, it's simple to make small changes that can significantly influence your overall health.
It is the end of the summer season; just two days ago the ground was blazing hot, almost hot enough to cook a crispy omelette if you poured the mixture onto it! Now the weather is gradually cooling down. At this time of year, fruits such as mangoes, litchi, oranges and watermelon fill the markets.
A humid climate calls for such juicy fruits to quench thirst and prevent dehydration. Everyone loves juicy, tasty fruit. As for the mango, it is known as the “King of Fruits”.
Its luscious, tempting and juicy texture makes it delightful to eat. Mangoes are eaten in both forms: raw, when they are greenish and taste tangy and sour; and ripe, when they are juicy, sweet and pulpy.
Mango leaves are considered sacred, so they are used on all kinds of auspicious occasions: worship, new-house ceremonies and many more. Only a few of us know that this very tree has numerous medicinal benefits hidden in it.
Also referred to as the “King of Fruits”, the mango is an edible fruit with numerous health benefits. It is mostly oval in shape and can weigh up to 6 kilograms. While unripe and still developing, mangoes are small, greenish and ball-shaped. The plant is native to India and the eastern group of islands.
In India, many varieties of mango are found, with different names, shapes, textures and consistencies. Those grown by sowing seeds in fields are called ‘Desi aam’. They are less fibrous, hence the juice extracted is not very thick.
Nomenclature and Appearance:-
Mango belongs to the kingdom Plantae and the family Anacardiaceae. Its binomial name is “Mangifera indica”, which is Latin in origin. In Hindi, it is called ‘aam’. The mango tree reaches a height of 30-100 ft. The leaves are elongated with a pointed tip.
The flowers are small and greenish-yellow in appearance. The blossoms are very fragrant and mature into fruit. The fruit grows in varied shapes; it can be long, round, oval and so on, while the developing colour also varies from green to yellow to red, or can be a mixture of these colours.
Nutrients in Mango:-
Since the mango is called the king of fruits, it is obvious that it must be rich in nutrients. It is rich in dietary fiber, vitamins A, C, E, K, B1, B2, B6 and B9, beta-carotene, calcium, iron, magnesium, phosphorus, zinc, sodium and many more.
Medicinal benefits of Mango:
Everyone likes mango, but only a few are aware of its medicinal benefits. Now, let's see how this fruit can be helpful in many kinds of ailments.
Cures intestinal worms
Only a few of us use natural techniques to cure intestinal worms. Instead, we take heavy allopathic medicines, which can be very harmful later on. Sun-dry unripe mango seeds and grind them to a fine powder.
Give 250-500 gm of this powder with curd or water twice a day. It is very helpful and cures intestinal worms with great ease.
For urinary disorders
Take 25 gm of fresh bark and crush it well. Soak this in 1 glass of water overnight. The next morning, crush the bark, strain the solution and drink it twice a day. You will notice results in as little as a week. You can also take the powder of dried leaves twice a day.
To boost your immune system, eat 1 sweet mango in the morning and at the same time have milk cooked with ginger root powder and dried dates, also called ‘chhuara’. It strengthens your immune system, making your body fit and strong.
Take unripe mangoes and crush them; extract the juice and add to it one-fourth of that quantity of methylated spirit or local wine. Mix well with the pulp extract and store it in a bottle. Do not use it for the first two days; then apply it to old eczema. It is also helpful in treating severe wounds.
For insect bite
Apply a paste of mango seeds or dried mango powder on the swelling from the bite. It heals the bite, especially spider bites. Pluck flowers from the mango tree on a Tuesday or Sunday after feeding a Brahmin, smear them on both hands and massage the area of the insect bite. It immediately relieves the patient of acute pain.
Take approximately 10 gm of dried mango leaves and boil them in half a liter of water until the water reduces to less than half. Filter it and drink this solution twice a day. It is traditionally claimed that doing so cures diabetes completely.
For liver problems
In case of any kind of liver problem, take 6-8 mango leaves dried in the shade and boil them in 250 ml of water until the water reduces by half. Strain it and add a little milk. Give this solution to the patient in the morning.
She is a keen learner and a health-conscious person. She is always interested in seeking knowledge of Ayurveda, herbal medicine and Yoga to treat health ailments naturally.
She holds a Bachelor's Degree in Alternative Medicine and is interested in Naturopathy.
"raw_score": 1.4805480241775513,
"reasoning_level": 1,
"interpretation": "Basic reasoning"
} | Health |
Little is known regarding the development of this species. Females lay eggs in moist, terrestrial burrows or crevices in late spring or early summer. All development occurs within the eggs, thus there is no aquatic larval stage. The young emerge 2 to 3 months later as sub-adults. Juveniles measure around 20 mm in length at one year of age and are oftentimes found under logs. Juveniles become reproductively mature at 4 to 5 years old, at which time they measure 50 to 76 mm snout-vent length (SVL) (Beamer and Lannoo, 2010).
During the spring, male white-spotted slimy salamanders search for female mates, typically underneath logs (Beamer and Lannoo, 2010). Once a male finds a female mate, he places his nasolabial grooves and mental glands against the female’s body. The male displays a foot dance in which he raises and lowers his rear limbs simultaneously or alternately. The male then moves towards the female’s head while repeatedly rubbing his nasolabial grooves on the female. Once the male reaches the female's head he rubs his mental gland over her head and nasolabial grooves. The male then places his head under her chin and attempts to pass beneath her, waving his tail as it passes under the female’s mouth. When the male stops moving forward, the female grabs on to his tail and then the pair move forward while the female is grasping onto the male. The pair continues to move forward until the spermatophore is deposited. No mate defense has been observed for this species.
The white-spotted slimy salamander is a terrestrial species and completes its entire life cycle on land. It is also a lungless species and breathes through its skin and the membranes of the mouth and throat. White-spotted slimy salamanders are named for their spotted appearance and their defensive strategy of secreting a very sticky substance from their skin glands that is extremely difficult to remove.
White-spotted slimy salamanders are generally solitary, but will congregate under optimal cover objects to avoid desiccation during dry periods. Females and juveniles are much more likely to share a cover object than multiple, territorial males.
White-spotted slimy salamanders may be active during the day or night, but are most active during rain events and at night. Little is known regarding migratory movements, but studies have shown that individuals move no more than 90 meters. Distance moved seems to correlate with age and, more specifically, reproductive maturity. Juveniles move less than 6 m, whereas salamanders between 55 and 65 mm SVL moved the most. This length is most often seen in individuals that have recently reached reproductive maturity and are likely moving in search of mates. (Beamer and Lannoo, 2010; "Northern Slimy Salamander", 2007)
Territory size is not well documented in this species, but males rarely will occupy the same cover object. The maximum recorded distance traveled for an individual is 91.5 m, but most adults do not move more than 9 m.
Two North American snakes are known predators of white-spotted slimy salamanders: garter snakes (genus Thamnophis) and copperheads (Agkistrodon contortrix) both feed on them (Beamer and Lannoo, 2010). All species of the genus Plethodon produce noxious skin secretions as a predator defense. White-spotted slimy salamanders produce copious amounts of slime, which often gums up a predator's mouth, giving the salamander a chance to escape. They also become immobile when physically contacted, making them less likely to be detected by visually oriented predators.
White-spotted slimy salamanders impact their communities through their burrowing, which contributes to the dynamics of the soil: they dig and break up the soil to increase aeration. They are also host to many internal parasites, including Cryptobia borreli, Eutrichomastix batrachorum, Haptophyra gigantean, Haptophyra michiganensis, Hexamastix batrachorum, Hexamitus intestinalis, Karotomorpha swezi, Prowazekella longifilis, Tririchomonas augusta, Brachycoelium hospitae, Capillaria inequalis, Cosmocercoides dukae, Oswaldocruzia pipiens, Oxyuris magnavulvaris, Acanthocephalus acutulus, and Hannemania dunni (Hairston, 1987).
There are no positive effects of white-spotted slimy salamanders on humans (Beamer and Lannoo, 2010).
There are no adverse effects of white-spotted slimy salamanders on humans (Beamer and Lannoo, 2010).
Stephen Wettstein (author), University of Michigan-Ann Arbor, Phil Myers (editor), University of Michigan-Ann Arbor, Rachelle Sterling (editor), Special Projects.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
uses sound to communicate
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
an animal that mainly eats meat
animals which must use heat acquired from the environment and behavioral adaptations to regulate body temperature
parental care is carried out by females
union of egg and spermatozoan
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
having a body temperature that fluctuates with that of the immediate environment; having no mechanism or a poorly developed mechanism for regulating internal body temperature.
the state that some animals enter during winter in which normal physiological processes are significantly reduced, thus lowering the animal's energy requirements. The act or condition of passing winter in a torpid or resting state, typically involving the abandonment of homoiothermy in mammals.
An animal that eats mainly insects or spiders.
fertilization takes place within the female's body
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
A large change in the shape or structure of an animal that happens as the animal grows. In insects, "incomplete metamorphosis" is when young animals are similar to adults and change gradually into the adult form, and "complete metamorphosis" is when there is a profound change between larval and adult forms. Butterflies have complete metamorphosis, grasshoppers have incomplete metamorphosis.
Having one mate at a time.
having the capacity to move from one place to another.
the area in which the animal is naturally found, the region in which it is endemic.
active during the night
reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body.
chemicals released into air or water that are detected by and responded to by other animals of the same species
breeding is confined to a particular season
remains in the same area
reproduction that includes combining the genetic contribution of two individuals, a male and a female
digs and breaks up soil so air and water can get in
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
defends an area within the home range, occupied by a single animal or group of animals of the same species and held through overt defense, display, or advertisement
movements of a hard surface that are produced by animals as signals to others
uses sight to communicate
eNature.com. 2007. "Northern Slimy Salamander" (On-line). eNature.com. Accessed February 22, 2010 at http://www.enature.com/fieldguides/detail.asp?allSpecies=y&searchText=slimy%20salamander&curGroupID=7&lgfromWhere=&curPageNum=1.
DLIA/ATBI. 2010. "Plethodon glutinosus (Green), Northern slimy salamander - Biodiversity of Great Smoky Mountains National Park" (On-line). Discover Life in America. Accessed April 01, 2010 at http://www.dlia.org/atbi/species/Animalia/Chordata/Amphibia/Urodela/Plethodontidae/Plethodon_glutinosus.shtml.
Virginia Department of Game and Inland Fisheries. 2010. "white-spotted slimy salamander (Plethodon cylindraceus)" (On-line). Virginia Department of Game and Inland Fisheries. Accessed April 01, 2010 at http://www.dgif.virginia.gov/wildlife/information/?s=020080.
Beamer, D., M. Lannoo. 2010. "Plethodon cylindraceus" (On-line). AmphibiaWeb: Information on amphibian biology and conservation. Accessed February 22, 2010 at http://amphibiaweb.org/cgi-bin/amphib_query?rel-common_name=like&rel-family=equals&rel-ordr=equals&rel-isocc=like&rel-description=like&rel-distribution=like&rel-life_history=like&rel-trends_and_threats=like&rel-relation_to_humans=like&rel-comments=like&rel-submittedby=like&query_src=aw_search_index&max=200&orderbyaw=Family&where-scientific_name=plethodon+cylindraceus&where-common_name=&where-subfamily=&where-family=any&where-ordr=any&where-isocc=&rel-species_account=matchboolean&where-species_account=&rel-declinecauses=equals&where-declinecauses=&rel-iucn=equals&where-iucn=&rel-cites=equals&where-cites=&where-submittedby=.
Bruce, R., R. Jaeger, L. Houck. 2000. The Biology of Plethodontid Salamanders. New York, New York: Kluwer Academic/Plenum Publishers.
Carr, D. 1996. Morphological Veriation among Species and Populations of Salamanders in the Plethodon glutinosus Complex. Herpetologica, Vol. 52, No. 1: 56-65. Accessed February 22, 2010 at http://www.jstor.org.proxy.lib.umich.edu/stable/3892956.
Diagram Book, T. 2003. Animal anatomy on file. New York: Facts On File.
Duellman, W. 1999. Patterns of distribution of amphibians : a global perspective. Baltimore, MD: Johns Hopkins University Press.
Francis, E., F. Cole. 2002. The anatomy of the salamander. Salt Lake City, Utah: Society for the Study of Amphibians and Reptiles.
Hairston, N. 1987. Community ecology and salamander guilds. Cambridge (Cambridgeshire): Cambridge University Press.
Hickman Jr., R., L. Roberts, S. Keen, A. Larson, D. Eisenhour. 2009. Animal Diversity 5th Edition. New York, New York: The McGraw-Hill Companies, Inc..
Highton, R. 1995. Speciation in Eastern North American Salamanders of the Genus Plethodon. Annual Review of Ecology and Systematics, Volume 26: 579-600. Accessed February 22, 2010 at http://arjournals.annualreviews.org.proxy.lib.umich.edu/doi/abs/10.1146%2Fannurev.es.26.110195.003051.
Highton, R., G. Maha, L. Maxson. 1989. Biochemical evolution in the slimy salamanders of the Plethodon glutinosus complex in the eastern United States. Urbana: University of Illinois Press.
Mitchell, J., S. Rinehart, J. Pagels, K. Buhlmann, C. Pague. 1997. Factors influencing amphibian and small mammal assemblages in central Appalachian forests. Forest Ecology and Management, Volume 96, Issues 1-2: 65-76. Accessed February 22, 2010 at http://www.sciencedirect.com.proxy.lib.umich.edu/science?_ob=ArticleURL&_udi=B6T6X-3RHMS12-H&_user=99318&_coverDate=08%2F15%2F1997&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1217347122&_rerunOrigin=scholar.google&_acct=C000007678&_version=1&_urlVersion=0&_userid=99318&md5=9dd8cb4e9878ed99cd433e76a78ff73e.
Pope, C., S. Pope. 1949. Notes on growth and reproduction of the slimy salamander Plethodon glutinosus. Chicago: Chicago Natural History Museum.
Powell, R., J. Collins, E. Hooper. 1998. A key to amphibians and reptiles of the continental United States and Canada. Lawrence: University Press of Kansas.
Roth, G. 1987. Visual behavior in salamanders. Berlin: Springer-Verlag.
Taggart, T. 2010. "White-Spotted Slimy Salamander" (On-line). CNAH - The Center for North American Herpetology. Accessed February 22, 2010 at http://naherpetology.org/comments.asp?id=968.
Westfall, M., K. Cecala, S. Price, M. Dorcas. 2008. Patterns of Trombiculid Mite (Hannemania dunni) Parasitism among Plethodontid Salamanders in the Western Piedmont of North Carolina. Journal of Parasitology, Volume 94, Issue 3: 631-634. Accessed February 22, 2010 at http://www.bioone.org.proxy.lib.umich.edu/doi/abs/10.1645/GE-1260.1.
The eyes are one of the most important parts of the body because without them we would not be able to enjoy the world as we know it. The eyes allow us to experience the richness of life by letting us see the beauty around us, which makes us appreciate everything even more. Since the eyes are so essential, we must learn how to protect them and keep them healthy.
There are several ways to maintain good eyesight, and if we follow these steps there may be less need to visit an optometrist so often, saving us the extra expense of a costly pair of eyeglasses.
Ways to keep healthy eyesight
Having a good, healthy diet that consists of fruits and vegetables can do a lot to maintain healthy eyesight. You can eat foods that are rich in Vitamin C such as broccoli, bell peppers and Brussels sprouts. You can also include in your diet some foods that are high in Omega-3 fats like salmon and sardines. Other foods like sweet potatoes and spinach are also great for the eyes, as they contain beta-carotene, and spinach also provides lutein.
Aside from having a healthy diet, it would also be great if you can do eye exercises because this way you can reduce eye strain as well and be able to relax your eyes. Having a balanced weight and not smoking can help in keeping healthy eyesight and at the same time, these factors can give you more health benefits as well.
More tips for better eyesight
If you are wearing contact lenses, make sure that you do not forget to remove them before you go to sleep, and it is also better if you do not wear them when you go swimming except if you are wearing goggles. Also, it is good if you can wear protective sunglasses to protect your eyes from the sun’s ultraviolet rays.
When you have to use eye drops for some reason, use only the required dosage, because using more than is needed may cause harm. Looking at a computer screen for a long time may cause dry eyes, so it is suggested that you consciously blink at least every 30 seconds to prevent this from happening.
Use a good light when you read so as not to strain your eyesight, and if you feel that your eyes are tired, allow them to rest. Also, never look directly at a bright light.
Putting cucumber over your eyes for at least a few minutes before you go to bed can reduce puffiness so it may also be a great idea to include this in your beauty regimen. Now that you have an idea on how to take care of your eyes in the most practical way, do still visit your optometrist Sydney to make sure that your eyes are perfect.
Twelve years before it’s supposed to be finished, the California High-Speed Rail (HSR) system is already outdated. While the state HSR Authority focuses on building infrastructure that dates back to the Kennedy era, the private sector has moved on. Elon Musk – who called High-Speed Rail “one of the most expensive per mile and one of the slowest in the world” – has proposed an alternative. In 2012, Musk, the California entrepreneur behind SpaceX and Tesla, offered the template for what he called hyperloop – a pneumatic tube through which people and cargo can rapidly move between cities.
Musk released his specs into the wild – theory, diagrams, the works – and declared it open source. The range of companies now involved in building and testing hyperloops sends a clear message to intercity transport enthusiasts – the government isn’t essential to the transit revolution.
In January 2017, three teams of researchers (from MIT, WARR in Germany, and Delft in the Netherlands) all successfully completed SpaceX's first-ever hyperloop pod competition. Over a short distance of 0.75 miles, WARR's pod was the fastest in the tube, reaching a speed of 58 mph and coming to a successful stop. Over a longer distance the pods would have more time to accelerate. By Musk's projections, the pods would carry people at speeds of up to 760 mph, run from San Francisco to Los Angeles in 30 minutes, and cost roughly $6 billion to build. The High-Speed Rail is supposed to make the same trip in two hours and forty minutes (but won't do so) and at a construction cost of $64 billion.
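To put the quoted figures side by side, here is the simple arithmetic implied by the numbers above (illustrative only, using just the travel times and construction costs cited in this article):

```python
# Figures quoted above: hyperloop 30 minutes and ~$6 billion; High-Speed Rail 2h40m and ~$64 billion.
hyperloop_minutes, hyperloop_cost_bn = 30, 6
hsr_minutes, hsr_cost_bn = 2 * 60 + 40, 64

print(f"Travel time ratio (HSR vs hyperloop): {hsr_minutes / hyperloop_minutes:.1f}x")       # ~5.3x
print(f"Construction cost ratio (HSR vs hyperloop): {hsr_cost_bn / hyperloop_cost_bn:.1f}x")  # ~10.7x
```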
Los Angeles-based Hyperloop One, with its team of 200 engineers, has raised more than $160 million in capital and has completed the development of a test track in Nevada. The firm has also signed an agreement with DP World, the world’s third-largest port operator, to conduct a hyperloop feasibility study for freight movement in the UAE. Additionally, Dubai’s Road and Transport Authority commissioned Hyperloop One to do a feasibility study on potential passenger routes.
Another company, Transpod, expects to have a commercial prototype by 2020. Though Transpod has raised less money than Hyperloop One, it successfully closed a seed round of $15 million in 2016 and is looking to build its prototype in Canada. Hyperloop Transportation Technologies, a company with over 600 researchers that has raised $100 million in investment capital, is also exploring track locations in Australia. What’s weird about all this: California companies are flocking elsewhere to build and develop a technology conceived in California. With the High-Speed Rail Authority in the way, this comes as no surprise.
In just five years the hyperloop has progressed from a seemingly unreachable idea to initial testing – far more than can be said for the California High-Speed Rail project. The state project is relying on technology first implemented in Japan in 1964. It's a far cry from what Governor Jerry Brown called a "21st-century transportation system." Cost has become another problem. In 2008, the estimated cost of the High-Speed Rail project was $45 billion, but in its most recent business plan the HSR Authority quoted a cost of $64 billion to implement a much less extensive system. The railway project has a $40 billion funding gap and, despite seeking private funding, not a single investor has come forward.
Overall, California's taxpayers should not be propping up the obsolete railway system, and the government should instead clear the way for a new future of transportation. As private companies, such as Hyperloop One, take matters into their own hands, there is no need for the government to develop intercity transportation. With the next hyperloop test competition on August 25-27, 2017 and a total of 22 teams, it is important to ask: "Is California's High-Speed Rail Authority stuck in the past?"
This fall California Policy Center and the Lincoln Network will sponsor a forum on Hyperloop and California High-Speed Rail in San Francisco. Check back in a few weeks for further details.
Fashion is a powerful tool, especially for someone in the public eye as much as the first lady. For decades, these women have used garments like lace dresses, low-cut tops, or the famous pantsuit (hi, Hillary!) to communicate with the American people. When successful, a first lady can use her clothing to her advantage to relay a message, other times not so much. Take a trip down memory lane with 40 leading ladies and the fashion that made a statement during their time in the White House.
As one of the richest women of the late 18th century, Martha Washington had more than enough room to experiment with fashion. She was able to choose between the finest fabrics for her gown, cloak, headpiece, and gloves, as seen here. Her most notable piece of fashion, royal purple silk wedding shoes from her wedding to George, is considered "the Manolo Blahniks of her time."
Unlike other first ladies, Abigail Adams actually rejected French fashion, opting for high-up embroidered collars. In a letter to her sister, she wrote about her agreement with a local preacher against the latest fashion, noting that he "thinks there are some ladies in this city, who stand in need of admonition, and I fully agree with him."
After her mother passed away when she was young, Martha Jefferson Randolph assumed the role of first lady when her father, Thomas Jefferson, took office in 1801. Though she wasn't often at the White House, she usually wore the latest Victorian fashions like a frilly hat with a purple bow.
As a former Quaker, Dolley Madison was used to wearing more modest clothing, but that changed when she left the faith. She then started wearing low-cut dresses made famous during the Napoleonic Era that were rich in color, with fabrics that made her "look like a Queen" to spectators.
Before her husband became president, Elizabeth Kortright Monroe lived abroad in Paris and London for four years. Used to European fashion, she usually wore cap sleeve dresses and shawls at White House functions. Her adoption of French clothing combined with her physical beauty earned her the nickname, “La Belle Americane.”
Louisa Catherine Adams didn't like to follow society rules, and is said to be the first first lady to wear makeup, using homemade face powder and lipstick against her husband's wishes. She often was forced to wear dark dresses that contrasted with her pale skin, making her want to use the makeup so she wouldn't be “a fright in the midst of the splendor.”
Helping her widowed uncle, President Martin Van Buren, Angelica Singleton Van Buren became the first lady at 21 years old. Keeping up with the trends of the time, she liked to wear her hair in tight ringlets, often using feathers as hair accessories with off-the-shoulder gowns.
Like most women of the 19th century, Sarah Childress Polk was obsessed with Parisian fashion. She reportedly wore elegant gowns and headdresses imported from France, made from expensive material of velvet, satin, and silk, which were often decorated with imported fringe, ribbons, and lace.
Very conscious of her appearance to others, Abigail Powers Fillmore hired someone to do her hair and design special dresses for public occasions. She was the first first lady to have items made on a sewing machine, which is why the dress pictured here is more advanced than earlier first lady fashion.
On the way to her husband's inauguration, Jane Pierce got in a train accident that killed their 11-year-old son, Benjamin. Jane then spent the first two years as first lady in mourning, only wearing black dresses and accessories like the ones seen here.
The niece of James Buchanan is considered to be the Jackie Kennedy of her time. Most notably, she made national headlines for her "very" low-cut European-style dress that she wore to her uncle's inauguration. The dress, pictured here, was a hit among women, and bodices dropped an inch or two almost instantly.
Like we said in the previous slide, Lane Johnston's dress was a hit. The next first lady, Mary Todd Lincoln, loved the dress style so much she wore something similar to her husband's inauguration. As you can see, she liked her items lavish and is said to have gone $20,000 over the Congressional budget due to her spending habits.
Eliza McCardle Johnson, like many other first ladies, didn't want much publicity. Therefore, she usually wore more conservative items like dark dresses with high collars and shawls concealing most of her hair.
According to the National Museum of American History, Julia Dent Grant is said to have chosen American-made clothing that were “… becoming to my person and the condition of my purse.” This usually meant rich fabrics with some jewelry made of pearls or diamonds.
Sticking to the modest clothing trends of the time, Lucy Webb Hayes usually wore modest embroidered dresses in soft colors that covered her throat and arms.
While she may have been first lady for only a short period of time, a.k.a. around six months, Lucretia Garfield kept up with the latest fashion. She wore a lavender gown with a high collar to her husband's inaugural ball in 1881, as seen here.
Frances Folsom Cleveland was a rule-breaker and caused many controversies when she continually donned dresses that showed off her bare neck, shoulders, and arms. (I mean how gorgeous is this dress though?!) According to Time, the Women’s Christian Temperance Union got so fed up that they issued a petition asking her to stop wearing these dresses. She ignored them.
Caroline Scott Harrison's fashion choices as first lady deemed her, by The Philadelphia Times, "a sensible exemplar for American women." This was due to her modest wardrobe, featuring gowns with beaded details and floral patterns in neutral colors (almost always) made in America.
During a trip to Belgium, Ida Saxton McKinley was so shocked by what the workers went through to make the lace she bought, so she did as much as she could to help support them. According to the National First Ladies' Library, this meant a majority of her custom-made dresses featured a significant amount of lace. This inspired many other women to try to replicate the same look.
Edith Kermit Roosevelt liked her privacy and often wore the popular high-waisted dresses with trim skirts and gathered sleeves. She would often wear the same outfit over and over to throw off reporters and make them believe she had a larger closet than she did.
The "H" in "Helen Herron Taft" stands for "hats." Okay, maybe not like officially, but the former first lady was known to have a large collection back in the day. She was also the first first lady to donate her inaugural gown for public display.
It's said that Ellen Louise Wilson spent less than $1,000 a year on outfits, which is something that would seem totally unheard of today. She often wore plain or patterned high-waisted dresses.
Woodrow Wilson's second wife mainly wore dark dresses, often with lace, but they were still highly fashionable. Most of her items came from the House of Worth in Paris.
Florence Harding often wore heavily-beaded dresses and fur pieces. This dress, pictured here, is so heavy that the dress has to be laid down sideways to avoid ruin when not on display. Crazy, I know!
Compared to her partner, Grace Goodhue Coolidge liked to make a statement and vocalized that through her clothing. She often wore sleek shift dresses in bright colors with outlandish hats. According to the National Museum of American History, her husband would surprise her and pick out her outfits.
During the Great Depression, Lou Henry Hoover kept things simple. She usually wore American-made dresses, emphasizing the importance of cotton clothing to promote the cotton textile industry.
Large hats were the staple of Eleanor Roosevelt's style. They were often worn with long skirts or dresses that kept up with the conservative aesthetic she wanted to accomplish.
Not used to being the center of attention, Elizabeth "Bess" Truman liked to wear pieces that allowed her to blend into the background and wouldn't be front-page news. This meant her wardrobe consisted of patterned shirtwaist dresses with tea-length skirts, as pictured here.
Mamie Doud Eisenhower wore this bubblegum pink shade so much during her time as first lady. It eventually became known as "Mamie pink" and was donned by most women in the '50s and early '60s.
During her time as first lady and for years after, Jackie O. designed most of her clothes. She's probably the most memorable fashionable first lady in history, and it's easy to see why.
Yesterday I sat here at the Convention on Biological Diversity (CBD) meeting in Nagoya, anxiously waiting for the official launch of The Economics of Ecosystems and Biodiversity (TEEB) report. Next to me sat Paulo Nunes, an economist leading the call for research worldwide on the contribution of biodiversity and ecosystem services to the global economy. After several years of collaboration, he is now a good friend and great research partner. “It seems like such a long time,” he says, almost reading my thoughts. And what an amazing journey has it been.
TEEB began in 2007 with a call from the world’s leading and emerging national economies for a “global study on the economic benefits of biological diversity, the cost of the loss and the failure to protect versus the costs of effective conservation.” An ambitious initiative inspired by the Stern Report—a 2006 publication examining the impacts of climate change on the economy—TEEB aimed to assess the value of biodiversity and ecosystem services, and use this data to inform development decisions. Over 500 scientists worldwide contributed to the report, including myself and several others from Conservation International (CI).
The synthesis of the TEEB final report was delivered by Pavan Sukhdev, head of the Green Economy Initiative at the UN Environmental Programme and one of the most charismatic, intelligent and articulate people I have ever met. The report makes one of the most compelling cases for global action on biodiversity protection: our economy depends on it. The report calls for society to make nature’s values visible, and for decision-makers and the business community to assess, communicate and take actions that incorporate the role of biodiversity and ecosystem services—from freshwater provision to nutrient cycling—in economic activities.
The report emphasizes a tiered approach to put sustainable development into practice by recognizing the value of ecosystem services to human communities, demonstrating such values in economic terms, and protecting these values with appropriate mechanisms and tools.
Here are just a few of the findings compiled in the report:
- At the current catch rate, the global fisheries industry is shrinking by US$ 50 billion every year.
- The forest patches adjacent to Costa Rican coffee plantations provide a pollination service that is equivalent to about 7 percent of the average farm’s income.
- Thirty million people worldwide are completely reliant on coral reef ecosystems for their food and livelihoods.
- By choosing to pay landowners in the Catskill Mountains to protect their watersheds, the New York City authorities were able to maintain the city’s freshwater supply without building expensive water treatment facilities—at a fraction of the price.
TEEB also identifies priority geographic areas to implement these efforts, specifically in terms of communication, valuation, measurements and management, poverty reduction, financial accounting disclosure, use of economic incentives, ecosystems conservation and restoration.
Here at the CBD, where delegations from 193 countries are negotiating and will hopefully agree on an ambitious target for an extensive coverage of protected terrestrial and marine area globally, CI is strongly urging agreement on the protection of at least 25 percent of Earth’s land and 15 percent of Earth’s oceans.
Pavan ended his speech to the CBD plenary audience by restating that valuing nature is about our future. I couldn’t agree more, and I am now anxiously waiting for the next phase. TEEB is just beginning its challenging role of working with governments, the business community and the rest of society to figure out how to truly value nature in development.
Rosimeiry Portela is an ecological economist and the senior director for global change and ecosystem services in CI’s Science + Knowledge division. She also coordinated CI’s contributions to the TEEB report.
World Cancer Day
February 4th 2017 is World Cancer Day – a global campaign to raise awareness, show support and unite to fight cancer.
In the UK there are roughly 356,860 new cases of cancer a year. 1 in 2 people in the UK will be diagnosed with cancer at some point in their lives.
And while that number may feel high, advances in research, treatment and technology mean survival rates are better than ever.
Unite against cancer
This year the campaign encourages people to unite together to build a future where every patient survives cancer.
Thanks to fundraising efforts survival rates have doubled in the last 40 years. Within the next 20 years Cancer Research UK hopes to see 3 in 4 patients surviving cancer.
Through prevention and awareness, treatment, and community support we can help people and their families through every stage of cancer.
World Cancer Day is a chance to come together, raise money and show everyone living with cancer they are not fighting alone. Cancer Research UK is asking people to wear Unity Bands to show support – the proceeds of which go towards life saving research.
More cancer resources
If you want to learn more about cancer or are concerned about a certain type of cancer, there are great resources on the web to help answer your questions. If you have questions about recognising symptoms and reducing risk, NHS Choices and Cancer Research are both excellent places to go for general information.
For an in-depth guide to understanding cancer, Macmillan has a number of useful guides.
You can also read our guides on Silversurfers – we’ve written about supporting a loved one through cancer, websites for cancer information, as well as breast cancer awareness, prostate cancer awareness and more.
What are your experiences with cancer? Will you wear a Unity Band for World Cancer Day? Share your thoughts in the comments below.
Fluorescence microscopy is an imaging technique used in light microscopes that allows the excitation of fluorophores and subsequent detection of the fluorescence signal. Fluorescence is produced when light excites an electron to a higher energy state; as the electron returns to its ground state, it emits light of a longer wavelength, lower energy and a different color from the light originally absorbed.
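As a rough numerical illustration of why the emitted light has lower energy, the energy of a photon is E = hc/λ, so a longer wavelength always means a lower-energy photon. The wavelengths below are typical values for a green-emitting dye, chosen for illustration rather than taken from the text:

```python
# Photon energy E = h * c / wavelength.
PLANCK_H = 6.626e-34    # Planck constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength in nanometers."""
    return PLANCK_H * LIGHT_SPEED / (wavelength_nm * 1e-9)

excitation_nm, emission_nm = 488.0, 520.0  # illustrative excitation/emission pair
print(photon_energy_joules(excitation_nm))  # ~4.07e-19 J absorbed
print(photon_energy_joules(emission_nm))    # ~3.82e-19 J emitted (longer wavelength, lower energy)
```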
To visualize labeled molecules in the sample, fluorescence microscopes require a very powerful light source and a dichroic mirror to reflect light at the desired excitation/emission wavelength. The filtered excitation light then passes through the objective to be focused onto the sample and the emitted light is filtered back onto the detector for image digitalization.
Fluorescence microscopy is highly sensitive, specific, reliable and extensively used by scientists to observe the localization of molecules within cells, and of cells within tissues. Fluorescence imaging is reasonably gentle on the sample, which facilitates the visualization of molecules and dynamic processes in live cells. In conventional fluorescence microscopes, the light beam penetrates the full depth of the sample, allowing easy imaging of intense signals and co-localization studies with multi-colored fluorophores on the same sample.
Fluorescence microscopy can, however, limit the precise localization of fluorescent molecules, as any out-of-focus light will also be collected. This can be resolved by using super-resolution techniques, which circumvent the resolution limit of conventional fluorescence microscopy: a conventional microscope cannot distinguish objects that are less than 200 nm apart.
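That figure of roughly 200 nm comes from the diffraction limit of light; a common back-of-the-envelope estimate is the Abbe criterion d = wavelength / (2 × NA). The wavelength and numerical aperture below are typical assumed values, not Nanoimager specifications:

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit: smallest resolvable separation d = wavelength / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light through a high-NA oil-immersion objective (assumed example values).
print(f"{abbe_limit_nm(520.0, 1.4):.0f} nm")  # ~186 nm, in line with the ~200 nm figure above
```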
The Nanoimager is a compact microscope that allows powerful imaging of fluorescence molecules with enhanced resolution and enriched visualization by using different super-resolution techniques, including dSTORM, PALM, single-particle tracking and smFRET. The Nanoimager relies on different modes of imaging, depending on the sample and optimal signal to noise ratio, and can use three illumination modes: epifluorescence, TIRF or HILO.
With the Nanoimager, two fluorophores can be captured simultaneously to improve understanding of how different molecules interact and with a total of four different fluorophores (four laser colors) to be used in a single sample. The Nanoimager significantly improves the resolution of fluorescence microscopy to up to 20nm and provides unrivalled stability, allowing the microscope to be used in any lab environment.
Our team of scientists are waiting to help with your questions
75th anniversary of Holodomor — Great Famine of 1932–33
The word Holodomor, meaning devastating famine, is used in Ukraine in reference to the Great Famine of 1932–1933 (similarly, the word Holocaust is used in English in reference to the mass destruction of human beings, and in particular of the Jews murdered by the Nazis in WWII). The year 2008 marked the seventy-fifth anniversary of this great tragedy.
Holodomor was artificially created and sustained in Ukraine by the soviet authorities, inspired by Stalin, to deal a mortal blow to the Ukrainian peasants, who were firm and unyielding resisters to the soviet regime and the way the regime was running the economy, and agriculture in particular. Massive deportations were found to be not enough; then the vilest and most barbaric method of mass destruction of people, never heard of before, was applied: genocidal famine. All the food was taken away from peasants who lived in the vast rural areas of Ukraine, all the stored grain was confiscated, and no food was delivered to the hunger-stricken. Army and police cordoned off the villages to prevent people from seeking refuge and help elsewhere. Records of death were destroyed or hidden in the archives, witnesses and survivors were silenced, and as a result no statistics are available. The soviets never officially admitted that the horrendous crime had taken place, and the soviet communist party and its leadership, who were directly responsible for it, never repented, apologized or said they were sorry. “It’s vicious anti-soviet propaganda, and is absolutely untrue,” was the tune they were singing for many decades.
After Ukraine had regained independence, research and studies revealed that up to seven, or even ten million people had died in the genocidal famine of unprecedented proportions. However, it was only Ukraine’s President Viktor Yushchenko (president since 2005) who expended a lot of effort to raise awareness of this great tragedy, and of the crime committed in 1932–1933, both in Ukraine and among the international community. Ukraine called upon the world to recognize Holodomor as an act of genocide directed against the Ukrainian people.
The occasion of the seventy-fifth anniversary of Holodomor was marked in November of 2008 by a string of events across Ukraine. They culminated in Kyiv, on November 21 and 22 in a series of events, of which the central one was the unveiling of the Holodomor Memorial complex in the vicinity of the eleventh-century Pechersk Lavra Monastery.
On November 18, the Book of Memory, the first in an ongoing series of publications devoted to Holodomor, was presented at the Ukrayinsky Dim Culture Center. The book contains 6,000 testimonies of those who survived the Holodomor famine; the number of names mentioned in the book of those known to have died during the famine is 825,510. Over 10,000 people took part in compiling this book and the volumes devoted to the events of Holodomor in individual Oblasts of Ukraine (there are nineteen volumes altogether so far). The total number of testimonies in the database of this Holodomor Martyrology is over 200,000. The work of collecting testimonies and other data will continue.
The presentation of the Book of Memory was attended by President Yushchenko of Ukraine. He also saw an exhibition, Holodomor 1932–1933. Genocide of the Ukrainian People, which was mounted by the Institute of National Memory.
A special postal stamp, devoted to Holodomor, was released.
From November 17 through November 22, the International Press Center Holodomor. 75th Anniversary was working at the Ukrayinsky Dim Culture Center.
On November 21, a number of documentaries devoted to Holodomor were shown at the Ukrayinsky Dim Culture Center. The film director Serhiy Bukovsky presented to the public his documentary Zhyvi (Still Alive). Archive footage is augmented by footage shot at locations in several Oblasts of Ukraine, and in Wales, Great Britain, where the family of Garret Jones lived (Garret Jones was a British journalist who was the first to inform the world about the terrible famine in Ukraine back in March 1933).
One of the films that premiered at the Ukrayinsky Dim Culture Center was that of the US film director Bobby Leigh Holodomor 1932-1933; Genocide of the Ukrainian People.
On the same day Yury Lanyuk’s Oratory Skorbna maty (Grieving Mother), based on the poetry of Pavlo Tychyna, was performed at the National Philharmonic Society of Ukraine. The oratory is devoted to the tragedy of Holodomor.
On November 22, the memorial Complex Svichka Pam’yati (Candle of Memory) was unveiled. The ceremony of unveiling was attended by President Yushchenko and his wife Kateryna Yushchenko, by the presidents of Latvia and Lithuania, by the representatives of diplomatic missions in Ukraine, by many Ukrainian officials, clergy, cultural and public figures, survivors of the Great Famine and other guests.
Addressing those gathered and television audiences across Ukraine, President Yushchenko said that it was not only Ukraine that suffered terribly during the soviet times — the soviet regime was responsible for the death of millions of people of other ethnic backgrounds, for mass deportations that killed hundreds of thousands, for summary executions of untold numbers of innocent people, for the suffering of millions in the GULAG, for invasions to suppress the rising democracies in Hungary and Czechoslovakia, and for other appalling crimes. “Speaking about these crimes, I also say that they must never be forgotten — and must never be repeated.”
President Yushchenko lit the symbolic Eternal Candle in the Hall of Memory of the Holodomor Memorial Complex. Millions of people lit their own candles at their homes across Ukraine at the same time.
This year, the symbolic Candle of Memory was passed from Ukrainian community to Ukrainian community across the globe in 33 countries of the world. Its journey began in Australia on April 1.
On the morning of November 22, a memorial service was held at the Cathedral of Holy Sophia in Kyiv. Later in the day, an international forum, Narod miy zavzhdy bude! (My People Will Live Forever!) was held at the Opera House in Kyiv. It was attended by the presidents of Ukraine, of Georgia, of Latvia, of Lithuania, of Poland, by representatives of other countries of the world, of the European Parliament, UNESCO, Parliamentary Assembly of the Council of Europe, by high-ranking officials and clergy, by public figures and other distinguished guests.
In his speech, President Yushchenko said, in part:
“…A recollection from Savyntsi village in the Kyiv region: ‘In Vasyl Tanchyk’s family, he and his wife died when their infant child was still alive and was holding tightly to its mother. This child was taken to the cemetery together with its parents and thrown alive into the common grave… the child was then covered up with earth...’
… Do we comprehend magnitude of this loss? Do we understand our responsibility?..
My words bear national pain.
My words bear the strength of a great nation.
This day unites millions. And death steps back.
We are alive. We are the state. We have overcome.
We have defeated evil…
After seventy-five years, the nation and state pay back our debts to our deceased brothers and sisters.
I am grateful to all my fellow countrymen and Ukrainians of the world who have been seeking out and restoring the truth about Holodomor. I call upon you not to stop this blessed work.
I am grateful to the heads of states, to parliaments and governments, to international organizations and to the public for brotherly solidarity with us…
At the peak of Holodomor, 25 thousand people died every day in our land.
Terror through famine in Ukraine was a well-planned act of genocide…
The territories of Ukraine and the Kuban, where Ukrainians were the majority, were cordoned off by military units…
… Stalin had only one aim: to subdue the peasantry, to exterminate Ukraine’s elites and to break the spine of Ukrainians, who were the second largest ethnic group in the soviet empire and potentially posed the biggest threat to it…
With brotherly respect and sympathy we bow our heads before all those who suffered from Stalin’s regime much as we did: before Russians, Belarusians, Kazakhs, Crimean Tatars, Moldavians, Jews, and dozens of other nations…
We call upon all, and first of all upon the Russian Federation, to condemn the crimes of Stalinism and of the totalitarian Soviet Union, together, as true brothers who are honest and pure before each other.
The culprit is the imperial, communist, soviet regime.
There are those who deny Holodomor today and justify Stalin’s actions as a ‘rational way of governing’ … We condemn even the slightest attempts to justify the butchers of our nation…
I ask forgiveness for all sins, conscious or not, that were committed during a thousand years of history…
Only through honesty with ourselves — no matter how painful it can be — only through our comprehension of belonging to all that is Ukrainian will we clear the path towards a new life and a new future…
I believe deeply and firmly that if today a candle of remembrance is lit in every window, we will understand the most important thing — our nation will live forever.”
President George W. Bush of the USA and President-Elect Barack Obama were among the many heads of state and dignitaries who sent messages for the Holodomor observance, expressing their deepest condolences on this solemn occasion. They emphasized that everything should be done to prevent similar acts of cruelty from ever happening again.
The solemn events were overshadowed by fierce opposition from Russia. The Kremlin is resisting Ukraine’s campaign to win international recognition of the 1932–33 tragedy as an act of genocide against the Ukrainian nation, saying that other ethnic groups also suffered. President Medvedev of Russia, though invited, declined to attend the Holodomor observance, explaining his reasons in a letter sent to President Yushchenko. President Medvedev claimed that the issue of Holodomor was much too politicized, and that there were not enough grounds to call it genocide specifically directed against the Ukrainian people: “millions of people died in the Volga regions, southern Urals, western Siberia, Kazakhstan and Belarus. We do not condone the repressions of the Stalinist regime directed against the entire soviet people, but to say that the purpose was to destroy the Ukrainians means to go against the facts and to try to give a nationalistic context to the tragedy of all.”
At present, 13 states have officially recognized Holodomor as an act of genocide: Estonia, Australia, Canada, Hungary, Lithuania, Georgia, Poland, Peru, Paraguay, Ecuador, Colombia, Mexico and Latvia.
Many international organizations and states, though not yet recognizing Holodomor as an act of genocide against the Ukrainian people, have labeled the Great Famine of 1932–1933 “a crime against humanity.”
At the religious service commemorating the victims of Holodomor. The Cathedral of Holy Sophia, November 22 2008.
At the Memorial Complex Svichka Pamyati — Candle of Remembrance during the Holodomor observance on November 22 2008.
From left to right: a Holodomor survivor, President Viktor Yushchenko of Ukraine, his wife Kateryna Yushchenko, President Valdis Zatlers of Latvia, his wife Lilita Zatlere, President Valdas Adamkus of Lithuania, Ukraine’s Prime Minister Yuliya Tymoshenko at the Holodomor Memorial Complex. November 22 2008.
President Valdis Zatlers of Latvia with his wife at the observance at the Holodomor Memorial Complex.
Lithuanian President Valdas Adamkus helps plant trees at the Memorial Complex as part of the Holodomor observance.
Presentation of the National Book of Memory at the Ukrayinsky Dim Culture Center. November 18 2008.
Ukrainian poet Ivan Drach with one of the volumes
of the National Book of Memory. | <urn:uuid:05422e63-815b-4885-bbec-f1bacf2c788c> | CC-MAIN-2018-05 | http://www.wumag.kiev.ua/index2.php?param=pgs20084/36 | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887981.42/warc/CC-MAIN-20180119125144-20180119145144-00384.warc.gz | en | 0.963731 | 2,631 | 3.390625 | 3 | {
"raw_score": 2.9860973358154297,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | History |
Biologists have found an increased level of one of the components of the alarm pheromone in honey bees (Apis mellifera) infected with nozematosis. This disease is caused by microsporidia of the genus Nosema, close relatives of fungi. The scientists believe that the high level of this substance may be more harmful than the infection itself. The work is published in the journal Royal Society Open Science.
Many animals communicate with members of their own species with the help of chemical compounds called pheromones. Insect pheromones are particularly well studied; in insects these substances perform a variety of functions: sex pheromones attract individuals of the opposite sex, aggregation pheromones draw individuals together, and alarm pheromones are emitted in response to various threats.
Honeybees have two alarm pheromones. One of them is secreted by the mandibular glands and consists of the ketone 2-heptanone. It is believed to be emitted by guard bees to activate the colony's protective response when threatened, while forager bees use it to mark flowers that have already been visited. Some studies, however, suggest that this compound performs a different function: bees use it to paralyze parasites such as mites of the genus Varroa and remove them from the hive. The second pheromone is secreted by the Koschevnikov gland at the base of the sting and consists of several compounds. Bees emit this pheromone when they sting an enemy or when they are killed, which encourages other individuals to attack.
Insects can also secrete alarm pheromones in response to other dangers, such as infection, but little is known about such cases. For example, kissing bugs (Triatoma infestans) infected with the fungus Beauveria bassiana show an increased level of propionic acid, a component of their alarm pheromone. Researchers suggest that a large amount of this compound causes the bugs to keep a "social distance" that reduces transmission of the pathogen. Honeybees infected with intestinal parasites of the genus Nosema show various physiological changes, including changes in pheromone secretion in worker bees and queens. Until now, however, no change in alarm pheromone levels had been recorded in nozematosis.
A team of biologists from the United States and Turkey led by Christopher Mayack of Swarthmore College studied 100 honeybee workers from 30 hives and found nozematosis, caused by N. ceranae, in 18 bee colonies. The scientists then used mass spectrometry to identify the chemical compounds in each hive. It turned out that all infected hives had a high content of the unsaturated alcohol eicosenol (cis-11-eicosen-1-ol), one of the main components of the alarm pheromone. The effect of this substance depends on the situation: bees secrete eicosenol as part of the alarm pheromone when they are attacked, which stimulates aggression or avoidance, but they also use the compound to attract other individuals to feeding sites.
The authors of the paper believe that elevated eicosenol production at the colony level can lead to a significant change in behavior, and that this change may be more harmful to the colony than the infection itself. Previous studies have shown that healthy bees can kill individuals infected with nozematosis, but if the hive is heavily infected, lightly infected individuals instead avoid highly infected ones. The authors suggest that eicosenol plays a major role in both types of interaction. They also note that infected bees may be less sensitive to pheromones and therefore secrete larger amounts. The scientists conclude that further research on the alarm pheromones of infected bees is needed to determine the exact role of eicosenol.
Nozematosis is a dangerous disease that can cause the bee family to die. However, bees have learned to resist pathogens: scientists have found that the seminal fluid of drones contains substances that negatively affect the life of Nosema apis. | <urn:uuid:f1275ccc-5738-423b-ade8-1b5f06470f35> | CC-MAIN-2021-21 | https://www.fyberus.com/post/infected-bees-found-elevated-levels-of-pheromone-anxiety | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988927.95/warc/CC-MAIN-20210508211857-20210509001857-00603.warc.gz | en | 0.945595 | 852 | 3.46875 | 3 | {
"raw_score": 3.0595645904541016,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
TIPS: Communicating During Disasters
When disaster strikes, you want to be able to communicate by
both receiving and distributing information to others. You may
need to call UHDPS at 713-743-3333 or dial 9-1-1 for emergency
assistance, locate friends or family, or let loved ones know
that you are okay. During disasters, communications networks
could be damaged, lose power, or become congested. This fact
sheet provides two important sets of tips. The first will help
you prepare yourself on campus and your mobile devices for a
disaster. The second may help you communicate more effectively
during and immediately after a disaster.
BEFORE A DISASTER: HOW TO PREPARE YOUR HOME AND MOBILE DEVICE
1. Maintain a list of emergency phone numbers in your cell phone
and in or near your campus dorm phone.
2. Keep charged batteries and car phone chargers available for
back-up power for your cell phone.
3. If you have a traditional landline (non-broadband or VOIP)
phone, keep at least one non-cordless phone in your home because
it will work even if you lose power.
4. Prepare a family and friend contact sheet. This should
include at least one out-of-town contact that may be better able
to reach family members in an emergency.
5. Program "In Case of Emergency" (ICE) contacts into your cell
phone so emergency personnel can contact those people for you if
you are unable to use your phone. Let your ICE contacts know
that they are programmed into your phone and inform them of any
medical issues or other special needs you may have.
6. If you do not have a cell phone, keep a prepaid phone card to
use if needed during or after a disaster.
7. Have a battery-powered radio or television available (with extra batteries).
8. Subscribe to text alert services from UH (Go to
www.uhemergency.info) or local and state government
to receive alerts in the event of a disaster.
DURING AND AFTER A DISASTER: HOW TO REACH FRIENDS, LOVED ONES
& EMERGENCY SERVICES
1. If you have a life-threatening emergency, call 9-1-1.
Remember that you cannot currently text 9-1-1.
2. For non-emergency communications, use text messaging, e-mail,
or social media instead of making voice calls on your cell phone
to avoid tying up voice networks. Data-based services like texts
and emails are less likely to experience network congestion. You
can also use social media to post your status to let family and
friends know you are okay. In addition to Facebook and Twitter,
you can use resources such as the
American Red Cross's Safe and Well program.
3. Keep all phone calls brief. If you need to use a phone, try
to convey only vital information to emergency personnel and/or family.
4. If you are unsuccessful in completing a call using your cell
phone, wait ten seconds before redialing to help reduce network congestion.
5. Conserve your cell phone battery by reducing the brightness
of your screen, placing your phone in airplane mode, and closing
apps you are not using that draw power.
6. If you lose power, you can charge your cell phone in your
car. Just be sure your car is in a well-ventilated place (remove
it from the garage) and do not go to your car until any danger
has passed. You can also listen to your car radio for important news alerts.
7. Tune into broadcast television and radio for important news
alerts. If applicable, be sure that you know how to activate the
closed captioning or video description on your television.
8. If you do not have a hands-free device in your car, stop
driving or pull over to the side of the road before making a
call. Do not text on a cell phone, talk, or "tweet" without a
hands-free device while driving.
9. Immediately following a disaster, resist using your mobile
device to watch streaming videos, download music or videos, or
play video games, all of which can add to network congestion.
Limiting use of these services can help potentially life-saving
emergency calls get through to 9-1-1.
10. Check www.ready.gov regularly to find other helpful tips
for preparing for disasters and other emergencies.
11. Consumers with questions about their particular mobile phone
devices should contact their wireless provider or equipment manufacturer.
Sources of information | <urn:uuid:64029432-bd72-4139-b7a2-e5a7a77dfe9b> | CC-MAIN-2016-50 | http://www.uh.edu/af/news/January2012/ps1.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542938.92/warc/CC-MAIN-20161202170902-00189-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.903425 | 975 | 3.09375 | 3 | {
"raw_score": 1.7561506032943726,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Software |
Ian Houvet, Director, Boxfusion
A good education is a critical foundation for the success of any society. As we move into the Digital Age, with jobs as yet unimagined, the revision of our syllabus is becoming increasingly important in preparing our children for the future.
Digitising the syllabus
It would seem that the government is reviewing the relevance of the curriculum to incorporate digital skills. To this end, President Ramaphosa announced during his SONA address in mid-February that coding and robotics have already been introduced at Grade R to Grade 3 level across 200 schools, to be fully implemented across the country by 2022.
Coding is a language in itself, making the learning of coding a literacy skill that will be essential for ensuring employability in the future. In Africa in particular, it is critical to raise a generation of children who can solve local problems through locally created and coded solutions.
Not only is coding a literacy skill, but it imparts problem-solving skills to children. This helps them to leverage maths and logic to come up with solutions, not only to coding problems, but to real-life scenarios as well. Teaching coding instils a mindset of trial and error in children that also has the potential to produce resilience later in life.
Furthermore, coding is a space that allows children to become creative. When coding, children are required to use logical and computational thinking to solve problems, but they can also create solutions and games for themselves from scratch. This gives them a space to flex their imaginations.
Distribution of tablets
Besides upgrading the curriculum, providing children with the tools they need to learn with is a second component to ensuring they’re ready for the workplace of the future. In his 2019 address, President Ramaphosa had announced that “Over the next six years, we will provide every school child in South Africa with digital workbooks and textbooks on a tablet device.”
Beginning with historically disadvantaged schools and distributing digital textbooks across the country will allow children to become increasingly familiar with devices that they may not have access to at home and provides them with a platform to learn coding skills on a tablet device.
Going beyond primary education
A final prong to digitising the current curriculum is the creation of more colleges around the country. To that end, President Ramaphosa announced the founding of the Science and Innovation University in the City of Ekurhuleni.
According to the President, “This will enable young people in that metro to be trained in high-impact and cutting-edge technological innovation for current and future industries.”
This initiative will serve to address the country’s ICT skills shortage, as well as go some ways to improve youth unemployment by providing graduates with the abilities they will require to participate in the 4IR.
Giving graduates a foot in the door through practical workplace experience is an essential component of this education equation. At Boxfusion, we’re participating in the development of ICT skills through our annual graduate programme, which pulls students directly from universities and places them within our company to ensure that they develop real-world skills and increase their knowledge of coding and software development.
It is initiatives and public-private partnerships like these which have the potential to secure a future for our youth in the 4IR. | <urn:uuid:d60c4534-d6b6-42cb-a5dc-e6d31f66b3f2> | CC-MAIN-2022-27 | http://boxfusion.co.za/2020/03/10/increasing-digital-education-in-south-africa/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00180.warc.gz | en | 0.960048 | 676 | 3.03125 | 3 | {
"raw_score": 2.8483147621154785,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
The following excerpt is taken from Chapter 12 of Kevin Laland’s new book Darwin’s Unfinished Symphony: How Culture Made the Human Mind (2017, Princeton University Press).
We have all experienced technological innovation during our lifetimes and, depending on our age, can remember the first appearance of iPods in 2001, the World Wide Web in the 1990s, mobile phones in the 1970s, or color TV in the 1960s. Each of these influential, recent innovations swept society as the cutting-edge advance of the day, only to be refined, elaborated, and improved upon by succeeding technology. The logic of cultural evolution is identical to that of biological evolution, even if the details differ. New ideas, behaviors, or products are devised through diverse creative processes; these differ in their attractiveness, appeal, or utility, and as a result are differentially adopted, with newfangled variants superseding the obsolete. Technology advances and diversifies by refining existing technology, which in turn has bolted the innovations of earlier times onto their predecessor’s standard. Through endless waves of innovation and copying, cultures change over time. The logic applies broadly, from the simplest of manufactured products like pins and paper clips, to the dazzling complexity of space stations and CRISPR gene editing, and even back through time to the stone knapping of our hominin ancestors and the creations of animal innovators from the distant past. Technological evolution is relentless for exactly the same reason that biological evolution is; where there is diversity, including diversity in functional utility and inheritance, then natural selection inevitably occurs.
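To make that evolutionary logic concrete, the sketch below (not from the book) models cultural change as repeated payoff-biased copying plus occasional innovation. The population size, innovation rate, and the idea of scoring each variant with a single "utility" number are illustrative assumptions only.

```python
import random

# Minimal, assumed model of cultural evolution: each agent holds one cultural
# variant summarized by a "utility" score; agents copy variants from others in
# proportion to utility (differential adoption), and occasionally innovate.
POP_SIZE = 200          # number of agents (arbitrary)
GENERATIONS = 50        # number of copying rounds (arbitrary)
INNOVATION_RATE = 0.05  # chance an adopted variant is modified (arbitrary)

def innovate(utility):
    """Innovation tweaks an existing variant; some tweaks help, others hurt."""
    return max(0.0, utility + random.gauss(0.0, 0.2))

def step(population):
    """One round of payoff-biased copying followed by occasional innovation."""
    copied = random.choices(population, weights=population, k=POP_SIZE)
    return [innovate(u) if random.random() < INNOVATION_RATE else u for u in copied]

population = [1.0] * POP_SIZE  # everyone starts with the same baseline variant
for _ in range(GENERATIONS):
    population = step(population)

print(f"mean variant utility after {GENERATIONS} rounds: {sum(population) / POP_SIZE:.2f}")
```

Run repeatedly, the mean utility of the variants in circulation drifts upward; the point is simply that diversity, differential adoption, and inheritance are enough to produce cumulative change, whatever the details of the copying.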
Curiously, while the evolution of technology is apparent to many, the evolution of the arts is less widely accepted. Yet the production of artistic works, and the manner in which art changes over time, owes a substantive debt to imitation that goes far beyond the copying of styles, techniques, and materials. The film and theatre industries illustrate what architecture, painting, and sculpture affirm. That is, in the absence of a mind fine-tuned by natural selection for optimal social learning, art simply could not be produced.
Having examined the little appreciated dues that the art world owes to our biological heritage, we will go on to consider the evolution of dance. The history of dance is particularly well documented, and provides a wonderful case study with which to illustrate how human culture evolves. We will see that cultural evolution is neither linear (constantly progressing from simple to more complex over time), as envisaged by nineteenth-century anthropologists, nor treelike with independent lineages constantly branching, as Darwin portrayed biological evolution. Cultural evolution is more of a melting pot, with innovation often the product of borrowing from other domains, such that cultural lineages come together as well as diverge. This can be seen in the richly cross-fertilizing coevolution of dance, music, fashion, art, and technology, whose histories are intimately entwined.
We will begin with the movies. In the critically acclaimed film The Imitation Game, Benedict Cumberbatch received plaudits for his brilliant portrayal of Alan Turing, the eccentric genius who cracked the cyphers of the Enigma machine, used by the Nazis to send secure wireless messages during the Second World War; in so doing, Turing devised the world’s first computer. Turing’s machine endeavored to imitate the human mind and perhaps a particular mind; this was possibly the mind of his childhood friend and first love, Christopher Morcom, after whom he named his electromechanical code breaker. Turing was awarded the OBE by King George VI for his Bletchley Park services, which were estimated to have shortened the war by two to four years. But Turing’s life ended in tragedy. Prosecuted for homosexual acts, still a criminal offense in Britain in 1952, he endured two years of aggressive hormone “treatment” before committing suicide by eating an apple laced with cyanide shortly before his 42nd birthday. Only in 2013 did Queen Elizabeth II grant him a posthumous pardon and British Prime Minister Gordon Brown apologize for the appalling treatment that this brilliant scientist and war hero suffered at the hands of his nation.
Turing is widely regarded as the father of modern computing. According to artificial intelligence legend Marvin Minsky of MIT, Turing’s landmark paper of 1937 “contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it.” The metaphor of the mind has inspired artificial intelligence research for half a century, fueling countless advances in computing technology. As far back as 1996, human conceit took a humiliating hit when a supercomputer called Deep Blue defeated Garry Kasparov, perhaps the greatest-ever chess grandmaster, to exert the superiority of the mechanical over the organic mind. The world’s most powerful computer today, the Tianhe-2 supercomputer at China’s National University of Defense Technology, is the latest in a long line of refined imitations of pre-existing technology that can be traced all of the way back to Hut 8 at Bletchley Park. One day soon, quantum computers are expected to supplant today’s digital computers. Already the world’s most accurate clock is the Quantum Logic Clock, produced by the National Institute of Standards and Technology (NIST), which uses the vibrations of a single aluminum atom to record time so accurately that it would neither gain nor lose as much as a second in a billion years. Yet, in homage to their humble beginnings, such technologies are known as “quantum Turing machines.”
We readily recognize the role that imitation plays in technological evolution, just as we easily comprehend Turing’s attempt to imitate the mind’s computational power with a thinking machine. What is often overlooked is that, as every actor and actress earns a living by imitating the individuals portrayed, all movies are in the imitation game. The entire film industry relies on the ability of talented thespians to study their focal character’s behavior, speech, and mannerisms in meticulous detail and to duplicate these with sufficient precision to render their portrayal a compelling likeness, and leave the storyline credible. Cumberbatch convinces us that he really is Alan Turing, just as Marlon Brando persuaded us that he was Vito Corleone in The Godfather, or Meryl Streep is the quintessential Margaret Thatcher in The Iron Lady. The magic of the movies would dissipate instantly if this pretense ever broke down. Academy awards and Golden Globes are the ultimate recognition handed out to honor the world’s most gifted imitators. Tens of millions of years of selection for more and more accurate social learning has reached its pinnacle in the modern world’s Brandos and Streeps. Yet such extraordinary acting talent was clearly not directly favored by natural selection. No amateur dramatic productions were performed in the Pleistocene, and being a proficient actor did not bring reproductive benefits to early hominins. Acting is not an adaptation, but rather an “exaptation,”—that is, a trait originally fashioned by natural selection for an entirely different role. Acting proficiency is a byproduct of selection for imitation.
Among our distant ancestors, those individuals who were effective copiers did indeed enjoy fitness benefits, but their copying was expressed in learning challenging life skills, not the performing arts. We are all descended from a long line of inveterate imitators. By copying, our forebears learned how to make digging tools, spears, harpoons, and fish hooks; make drills, borers, throwing sticks, and needles; butcher carcasses and extract meat; build a fire and keep it going; pound, grind, and soak plant materials; hunt antelope, trap game, and catch fish; cook turtles, and make tools from their shells; mount a collective defense against ferocious carnivores; as well as what each sign, sound, and gesture observed in their society meant. These, and hundreds of others skills, were what shaped the polished, imitative capabilities of our lineage. Acquiring such proficiencies would have been a matter of life and death to the puny and defenseless members of our genus in their grim struggle to forge a living on the plains of Africa, the deserts of the Levant, or the Mediterranean coast.
Hundreds of thousands, perhaps millions, of years of selection for competent imitation has shaped the human brain, leaving it supremely adapted to translate visual information about the movements of others’ bodies into matching action from their own muscles, tendons, and joints. Now, eons later, we effortlessly direct this aptitude to fulfill goals utterly inconceivable to our forebears, with little reflection on what an extraordinary adaptation the ability to imitate represents. Imitation is no trivial matter. Few other animals are capable of motor imitation, and even those that do exhibit this form of learning cannot imitate with anything like the accuracy and precision of our species. For over a century psychologists have struggled to understand how imitation is possible. Most learning occurs when individuals receive “rewards” or “punishments” for their actions, like achieving a desired goal or else experiencing pain. This reinforcement encourages us to repeat actions that brought us pleasure and to avoid those activities that brought pain or stress, a process known as operant conditioning. The reward systems that elicit positive or negative sensations are ancient structures in the brain, fashioned by selection to train animals’ behavior to meet adaptive goals. However, when we learn to eat with chopsticks or to ride a bicycle by observing another individual, we have seemingly not received any direct reinforcement ourselves, so how do we do it? Even more challenging to understand, how do we connect the sight of someone else manipulating chopsticks, or peddling a bike, with the utterly different sensory experience that we encounter when we do these things? This correspondence problem has been the bugbear of imitation researchers for decades. Even today, there is little consensus as to how this is done. One conclusion is clear, however. Solving the correspondence problem requires links, in the form of a network of neurons, between the sensory and motor regions of the brain. Years ago, when a postdoctoral fellow at Cambridge University, working with eminent ethologist Patrick Bateson, I explored the evolution and development of imitation using artificial neural network models. We found that we could simulate imitation and other forms of social learning, provided we pretrained the artificial neural network with relevant prior experience that allowed it to create such links between perceptual inputs and motor outputs. Interestingly, our neural networks that simulated imitation possessed exactly the same properties as “mirror neurons.”
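The excerpt does not give the details of those models, so the following is only a schematic sketch of the general idea, with all parameters and the linear read-out chosen for illustration. An agent first accumulates "babbling" experience that pairs its own motor commands with their visible consequences; a mapping trained on that experience can then turn the sight of someone else's posture into a roughly matching motor command, which is one way of dissolving the correspondence problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "body": a fixed, assumed forward mapping from motor commands to the visual
# appearance of the resulting posture. The learner never inspects this directly.
BODY = 0.3 * rng.normal(size=(4, 6))    # 4 motor dimensions -> 6 visual features

def appearance(motor):
    return np.tanh(motor @ BODY)        # what a given posture looks like

# Stage 1: prior experience. Motor babbling pairs self-produced movements with
# the sensory consequences of those movements.
motor_babble = rng.normal(size=(2000, 4))
seen = appearance(motor_babble)

# Stage 2: learn an inverse mapping from visual input back to motor output
# (least squares stands in for whatever learning rule real brains use).
W, *_ = np.linalg.lstsq(seen, motor_babble, rcond=None)

# Stage 3: imitation. Observe a demonstrator's posture and reproduce it.
demonstrator_motor = rng.normal(size=4)
observed = appearance(demonstrator_motor)
imitated_motor = observed @ W

print("demonstrator:", np.round(demonstrator_motor, 2))
print("imitator:    ", np.round(imitated_motor, 2))
```

In this toy setup the imitator recovers the demonstrator's movement only approximately, but the structure of the solution mirrors the argument in the text: the cross-modal link is built from prior sensorimotor experience, not from the demonstration itself.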
Mirror neurons are cells in the brain that fire both when an individual performs an action, and when the individual sees the same action performed by others. Mirror neurons are widely thought to facilitate imitation. As the brain expanded during human evolution, those regions now known to be involved in imitation, such as the temporal and parietal lobe, grew disproportionately larger. The parietal lobe is that precise region of the primate brain in which mirror neurons were first detected in monkeys, and brain-imaging studies show that the same regions of the human brain indeed possess these mirroring properties. Plausibly, the mirror neuron system was the direct product of selection for enhanced imitation among our ancestors. These cognitive abilities continue to allow us to learn new skills today—for instance, how to drive, wield a hammer, or cook a meal. However, those same cognitive abilities are also what permits Jimmy Stewart to impress us every Christmas as George Bailey in It’s a Wonderful Life.
Also overlooked, but far less obvious, is how reliant film, theatre, opera, and even computer games are on the audiences’ abilities to imagine themselves part of the action, to experience the fear and the tension, and to share the main character’s emotions vicariously. These capabilities were also likely fashioned in the sweaty heat of the African jungle, where the ability to take the perspective and understand the goals and intentions of those occupied in important tasks helped the observer to acquire the relevant technology. Here too, the ancestral sharing of emotions in social settings, such as responding with anxiety to the fear of another or drawing joy from the laughter of a child, helped to shape the empathy and emotional contagion that makes the movie a heartfelt experience. These sensitivities are also reliant on forms of social learning with adaptive functions, such as helping individuals learn the identity of predators or circumvent other dangers. In the absence of these social learning abilities, we would all watch movies like sociopaths, utterly indifferent to the lead character’s trauma, equally unmoved by the Psycho shower scene or Rhett Butler and Scarlett O’Hara’s kiss. Global box office revenue was estimated at $40 billion in 2015. Without the human ability to imitate there would be no movie industry; and for that matter, no theatre or opera.
In fact, when we start to think about it, connections emerge between a surprising number of the arts and the imitative and innovative capabilities that drove the evolution of the human brain. Consider, for instance, sculpture. In order to complete his statue of David in 1504, Michelangelo had to solve a correspondence problem of his own. Rather than moving his own body to match David’s posture, Michelangelo had to move his hands and arms, skillfully wielding hammer and chisel to transform a block of marble into an exact replica. To do this, Michelangelo had to translate the visual inputs corresponding to the sight of the male model into motor outputs that generated a matching form in the stone. That he exceled in this challenge, and produced one of the greatest masterpieces of the Renaissance, is testament not just to his talent but also to many years of practice in stonework. Michelangelo began his artistic training at the age of 13, and spent time as a quarryman in Carrara, where he learned to brandish a hammer to good effect. Those years of experience functioned to train the neural circuitry of his brain (just as we trained our artificial neural networks) to be sensitive to the correspondences between the movements associated with his masonry and the physical results in the stone. That training, however, could only be effective because Michelangelo possessed a brain uniquely designed to generate rich cross-modal mappings between the sensory and motor cortex when given the right experience; this was a legacy of ancestral selection for imitative abilities.
Admiring marvelous sculpture like David or the Venus de Milo can be a startlingly sensual experience, especially when one considers that we are confronted with, in essence, a block of stone. One is often left secretly wanting to reach out and touch the beautiful forms. Some cultures, such as the Inuit, even make small sculptures that are solely meant to be handled, rather than seen. That we should experience such sensations again draws on those cross-modal neural networks. These connect physical representations of objects in our minds to the objects themselves, and from there to a pre-existing network of associations and, often intimate, memories.
Only a very large-brained species could ever have produced works of sculpture fashioned with such precision. Such works require meticulous and controlled hand movements, manual dexterity that evolved along with increased brain size. Mammalian brains changed in internal organization as they got larger, inevitably becoming more modular and asymmetrical with size, as described in chapter 6. With increasing overall size, larger brain regions typically become better connected to other regions and start to exert control over the rest of the brain. This occurs because neurons vie with each other to connect to target regions and this competition is generally won by those neurons that collectively fire the target cells, giving large brain regions an advantage. The net result is an increase in the ability of the larger brain regions to influence other regions. The dominant structure in the human brain is the neocortex, which accounts for approximately 80% of the brain by volume, more than in any other animal. In the primate lineage to humans, the neocortex (the thinking, learning, and planning part of the brain) has become larger over evolutionary time, and has exerted increasing control over the motor neurons of the spinal cord and brain stem; this has led to increased manual dexterity and more precise control of the limbs. The cerebellum, the second largest region of the human brain, also plays an important role in motor control and has enlarged during recent human evolution as well. This motor control is what makes humans exceptional at finely coordinated movements. If I am correct, and innovation and social learning have driven the neocortex and cerebellum to become larger over evolutionary time, then this natural selection may simultaneously have generated human greater dexterity, which could be expressed not just in painting and sculpture, but also acting, opera, and in particular, dance.
The motor control that allows humans to produce artistic works and performances spontaneously is a capability that no other animal shares. Granted, the internet is awash with reports and YouTube footage of artistic animals, but these have not stood up to close scrutiny from animal behavior experts. You may well be able to buy painting kits for your cat or dog, and your pet may well enjoy the experience, but little that is genuinely artistic is produced. Like most other animals that have been handed a paintbrush, dogs and cats lack both the inclination and motor control to produce representational art, and I strongly suspect that any abstract beauty observed in the colorful product is strictly in the eye of the pet owner.
Intriguingly, the Humane Society of the United States recently organized a Chimpanzee Art Contest, to which six chimpanzees submitted “masterpieces.” The winner, Brent, a 37-year-old male from ChimpHaven in Louisiana, received a $10,000 prize from the stately hands of Jane Goodall. Brent, apparently, produced the work with his tongue, rather than bothering to use a paintbrush. The original works were then auctioned off on eBay with the many thousands of dollars raised going to support primate sanctuaries. Yet, however much one admires this charming, clever, and well-motivated funding initiative, the claim that the chimpanzees concerned are artists, in any meaningful sense, is greeted with skepticism by animal behaviorists and art scholars alike. A generous reading of the artistic pretensions of these animals would at best acknowledge some pleasure in generating colorful compositions.
Elephants are considerably more interesting because to the astonishment of thousands of gullible tourists, they regularly produce realistic paintings of trees, flowers, or even other elephants in sensational public performances at several sanctuaries in Thailand (figure 11). The artwork, which the elephants sometimes even sign with their name, sells in droves. However, all is not as it seems. Each paintbrush is placed in the elephant’s trunk by its trainer, who then surreptitiously guides the trunk movements by gently tugging at its ears. The elephant has been trained to hold the brush to the paper and move it in the direction to which its ear is being pulled. At the very least, one has to acknowledge an impressive piece of animal training, and one cannot help but admire the precision and control that the painting elephants exhibit with their trunks. Yet, a trick has taken place, and the trainer gets away with it by cleverly positioning himself behind the elephant. The tourists nonetheless typically go home happy, even those who spot the ruse, since no one can say that their “priceless” artwork was not painted by an elephant!
Figure 11. Painting elephants are becoming a major tourist attraction in Thailand. The elephants regularly produce realistic paintings of trees, flowers and other elephants in impressive public performances. However, all is not as it seems, and the tourists are being hoodwinked. Copyright Philippe Huguen/AFP/Getty Images.
Representational art is a uniquely human domain. That elephants can, with guidance, produce these pictures is nevertheless fascinating, precisely because it demonstrates that with training, they too are capable of building up cross-modal neural networks in their brains that translate tactile sensory inputs into matching motor outputs. The painting elephants have solved a correspondence problem of their own. It may be no coincidence that an Asian elephant from South Korea called Koshik was recently shown to be capable of vocal imitation, including mimicking human speech, while Happy, another Asian elephant at the Bronx Zoo in New York, was shown to be able to recognize herself in a mirror. Almost certainly, these capabilities are related. Like sculpture, producing paintings (and mirror self-recognition) makes demands on the circuitry of the brain involved in imitation.
Our big brains not only afford precise control of our hands, arms, legs, and feet, but also of our mouth, tongue, and vocal cords, which is what endowed our species with the vocal dexterity to speak and sing. Without that cortical expansion, members of our species could neither have fashioned a work of art, nor vocally expressed their admiration for it. The evolution of language is surely central to the origins of art, since art is rife with symbolism. As described in chapter 8, symbolic and abstract thinking are widely regarded as foremost features of human cognition. The use of arbitrary symbols allows humans to represent and communicate a wide range of ideas and concepts through diverse mediums. We possess minds fashioned by natural selection to manipulate symbols and think abstractly through spoken language, but we also express this penchant for symbolism in numerous artistic endeavors.
Architecture is one such domain. Victor Hugo’s 1831 masterpiece, Notre Dame de Paris, contains an extraordinary chapter entitled, “This Will Destroy That”; it echoes the enigmatic words of the evil Archdeacon Frollo, who rants against the invention of the printing press. Frollo expresses the terror of the church in the face of a rising new power—printing—that threatens to supplant it. The concern was not just that people might start to rely on books rather than priests to acquire their knowledge and advice, but also that the cathedral’s magnificent gothic architecture, already in disrepair, would lose its power and symbolism:
It was a premonition that human thought … was about to change its outward mode of expression; that the dominant idea of each generation would, in future, be embodied in a new material, a new fashion; that the book of stone, so solid and so enduring, was to give way to the book of paper.
To the modern reader such fears appear irrational. Yet, in the preliterate world, powerful institutions literally wrote their authority in stone. From the Pyramids to St. Peter’s Basilica in Rome or the Palace of Versailles, the magnificence, scale, wealth, and beauty percolated with the symbolism of God-given command and assuredness.
Human artwork has a long history, dating back some 100,000 years. It exhibits all the hallmarks of cultural evolution. While painting manifests multiple divergent styles, one ancient conceptual lineage sets out to represent the visual experience with accuracy. Consider, for instance, René Magritte’s famous painting The Treachery of Images, which shows a pipe that looks as though it is a model for a tobacco advertisement. Much to the puzzlement of millions of admirers, Magritte painted below the pipe, “Ceci n’est pas une pipe,” which is translated as “This is not a pipe.” At first sight, this appears completely untrue. What we momentarily forget, of course, is that the painting is not a pipe, but an image of a pipe. When Magritte was once asked to explain this picture, he apparently replied that of course it was not a pipe; just try to fill it with tobacco! Magritte’s point might appear trite to some, privileged as we are to live in an age where we can overdose on magnificent artworks that perfectly capture perspective and exhibit astonishing accuracy of portrayal. In the contemporary artistic movement of hyperrealism, the pictures of artists like Diego Fazio, Jason de Graf, or Morgan Davidson use acrylics, pencil, or crayons with such astonishing accuracy that they are almost always mistaken for photographs. Their work can be placed in a long-standing tradition that sets out to produce precise, detailed, and accurate representation of the actual visual appearance of scenes and objects. This movement flourished at various periods, and has been known as “realism,” “naturalism,” or (with appropriate reference to imitation) “mimesis.” Such hyperreal works allow the viewer to escape the correspondence problem by producing an image that exactly mimics what it represents. However, there can be no such escape for the artist, who must overcome this challenge in order to succeed.
Nowhere in the arts is the correspondence problem more clearly manifest than in dance, which again harnesses those same cognitive faculties that are necessary to integrate distinct sensory inputs and outputs. Following an excited conversation in a Cambridge pub in 2014, I recently began a collaboration with Nicky Clayton and Clive Wilkins to study the evolution of dance. Nicky is a professor of psychology at Cambridge University and expert of animal cognition; she is also a passionate dancer, and she merges this with her research as scientific director to the Rambert, a leading contemporary dance company. Clive is equally impressive as a successful painter, writer, magician, and also a dance enthusiast. We rapidly converged on the hypothesis that dancing may only be possible because its performance exploits the neural circuitry employed in imitation.
Dancing requires the performer to match their actions to music, or to time their movements to fit the rhythm, which can sometimes even be an internal rhythm, such as the heartbeat. This demands a correspondence between the auditory inputs the dancer hears and the motor outputs they produce. Likewise, competent couple or group dancing requires individuals to coordinate their actions, and in the process match, reverse, or complement each other. This too calls for a correspondence between visual inputs and motor outputs. That humanity is able to solve these challenges, albeit with varying degrees of ease and grace, is a testament to the neural apparatus that we uniquely possess as a legacy of selection for imitative proficiency. The same reasoning applies when individuals dance alone.
Contemporary theories suggest that while the potential for imitation is inborn in humans, competence is only realized with appropriate lifetime experience. Early experiences, such as being rocked and sung to as a baby, help infants to form neural connections that link sound, movement, and rhythm, while numerous experiences later in life, such as playing a musical instrument, strengthen these networks. The suggestion that taking up the piano will make you a better dancer might seem curious, but that is a logical conclusion to draw from the neuropsychological data.
The relentless motivation to copy the actions of parents and older siblings that is apparent in young children may initially serve a social function, such as to strengthen social bonds. However, childhood imitation also trains the “mirroring” neural circuitry of the mind, leaving the child better placed later in life to integrate across sensory modalities. Theoretical work suggests that the experience of synchronous action forges links between the perception of self and others performing the same movements. Whether because past natural selection has tuned human brains specifically for imitation, or because humans construct developmental environments that promote imitative proficiency—or both—there can be no doubt that, compared to other animals, humans are exceptional imitators. A recent brain-scanning analysis of the neural basis of dance found that foot movement timed to music excited regions of the brain previously associated with imitation, and this may be no coincidence. Dancing inherently seems to require a brain capable of solving the correspondence problem.
Comparative evidence is remarkably consistent with this hypothesis. A number of animals have also been characterized as dancers, including snakes, bees, birds, bears, elephants, and chimpanzees; the last of these perform a “rain dance” during thunderstorms, which has a rhythmic, swaying motion. However, whether animals can truly be said to dance remains a contentious issue, which depends at least in part on how dance is defined. In contrast, the more specific question of whether animals can move their bodies in time to music or rhythm has been extensively investigated, with clear and positive conclusions. Strikingly, virtually all animals that pass this test are known to be highly proficient imitators, frequently in both vocal and motor domains.
This ability to move in rhythmic synchrony with a musical beat by nodding our head or tapping our feet, for instance, is a universal characteristic of humans, but is rarely observed in other species. The most prominent explanation for why this should be, known as the “vocal learning and rhythmic synchronization” hypothesis, is broadly in accord with the arguments presented here. This hypothesis suggests that moving in time to the rhythm (known as “entrainment”) relies on the neural circuitry for complex vocal learning; it is an ability that requires a tight link between auditory and motor circuits in the brain. The hypothesis predicts that only species of animal capable of vocal imitation, such as humans, parrots and songbirds, cetaceans, and pinnipeds, but not nonhuman primates and not those birds that do not learn their songs, will be capable of synchronizing movements to music.
The many videos of birds, mostly parrots, moving to music on the internet are consistent with the hypothesis, but compelling footage of other animals doing the same is comparatively rare. Some of these “dancing” birds have acquired celebrity status; the best known is Snowball, a sulphur-crested cockatoo, whose performances on YouTube have “gone viral.” Snowball can be seen to move with astonishing rhythmicity, head banging and kicking his feet in perfect time to Queen’s “Another One Bites The Dust” or the Backstreet Boys (figure 12). Home videos can be faked, and parrots also have the ability to mimic human movements, so the footage alone cannot show that Snowball is keeping time to music directly. For this reason, a team of researchers led by Aniruddh Patel at The Neurosciences Institute in San Diego brought Snowball into the laboratory to carry out careful experiments. Manipulating the tempo of a musical excerpt across a wide range, the researchers conclusively demonstrated that Snowball spontaneously adjusts the tempo of his movements to stay synchronized with the beat.
Figure 12. Snowball, a sulphur-crested cockatoo, performs dances on YouTube that have thrilled millions. Careful experiments have demonstrated that Snowball adjusts his movements to keep time to the music. By permission of Irena Schulz.
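The excerpt does not describe how Patel's team analyzed Snowball's movements, but a standard way to quantify entrainment is with circular statistics: each movement event is expressed as a phase within the beat cycle, and tight clustering of those phases (a mean resultant vector near 1) indicates locking to the beat. The sketch below uses fabricated event times purely for illustration.

```python
import numpy as np

def phase_locking(event_times, beat_times):
    """Mean resultant vector length of beat-relative phases
    (0 = no relationship to the beat, 1 = perfect locking)."""
    beat_times = np.asarray(beat_times)
    intervals = np.diff(beat_times)
    phases = []
    for t in np.asarray(event_times):
        i = np.searchsorted(beat_times, t) - 1   # index of the preceding beat
        if 0 <= i < len(intervals):
            phases.append(2 * np.pi * (t - beat_times[i]) / intervals[i])
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Fabricated example: beats at 120 bpm; one mover bobs near each beat,
# the other bobs at random times.
rng = np.random.default_rng(1)
beats = np.arange(0, 30, 0.5)                                   # a beat every 0.5 s
synced_bobs = beats[:-1] + rng.normal(0, 0.04, len(beats) - 1)  # near-beat movements
random_bobs = rng.uniform(0, 30, len(beats) - 1)                # unrelated movements

print("synchronized mover:", round(float(phase_locking(synced_bobs, beats)), 2))  # near 1
print("random mover:      ", round(float(phase_locking(random_bobs, beats)), 2))  # near 0
```

The same statistic can be computed at several tempos; if the value stays high as the tempo is shifted, the mover is genuinely adjusting to the beat rather than producing a fixed rhythm that happens to coincide with it.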
Thus far, evidence for spontaneous motor entrainment to music has been reported in at least nine species of birds including several types of parrot, and the Asian elephant, all of whom are vocal imitators, and several of which show motor imitation. Entrainment has also been shown in a chimpanzee, a renowned motor imitator. The sole exception to this association is the California sea lion, which is not known to exhibit vocal learning. However, the fact that related species show vocal learning, including several seals and the walrus, raises the possibility that this capability or a relevant precursor may yet be demonstrated. Lyrebirds have not been subject to entrainment experiments, but males are famous for their ability to imitate virtually any sounds, including dog barks, chainsaws, and car alarms. They can match subsets of songs from their extensive vocal repertoire with tail, wing, and leg movements to devise their own “dance” choreography. Clearly, there is more to dance, at least social or collective dance, than entrainment to music. There must also be coordination with others’ movements, which would seemingly draw on the neural circuitry that underlies motor, rather than vocal, imitation. However, a recent analysis of the avian brain suggests that vocal learning evolved through exploitation of pre-existing motor pathways, implying that vocal and motor imitation are reliant on similar circuitry. The animal data provide compelling support for a causal link between the capabilities for imitation and dance. Whether this is because imitation is necessary for entrainment, or merely facilitates it through reinforcing relevant neural circuitry, remains to be established.
Dance often tells a story, and this representational quality provides another link with imitation. For instance, in the “astronomical dances” of ancient Egypt, priests and priestesses accompanied by harps and pipes mimed significant events in the story of a god, or imitated cosmic patterns such as the rhythm of night and day. Through dance, Australian Aborigines depict the spirits and ideas associated with every aspect of the natural and unseen world. There are animal dances for women, which are thought to function like love potions or fertility treatments to make a lover return, or to induce pregnancy, while male dances are more often about fishing, hunting, and fighting. Africa, Asia, Australasia, and Europe all possess long-standing traditions for mask-culture dances, in which performers assume the role of the character associated with the mask and, often garbed in extravagant costumes, enact religious stories. Native Americans are famed for their war dances, which were thought so powerful and evocative they were banned by the United States government—the law was not repealed until 1934. A variety of animal dances are also performed by Native Americans, and include the buffalo dance, which was thought to lure buffalo herds close to the village, and the eagle dance, which is a tribute to these venerated birds. This tradition continues right through to the present. For instance, in 2009, the Rambert Dance Company marked the bicentenary of Darwin’s birth by collaborating with Nicky Clayton to produce The Comedy of Change, which evoked animal behavior on stage with spellbinding accuracy (figure 13). In all such instances, the creation and performance of the dance requires an ability on the part of the dancer to imitate the movements and sounds of particular people, animals, or events. This reproduction contributes importantly to the meaning of the dance in the community, and imparts a bonding or shared experience. Such dances reintroduce the correspondence problem, since the dancer, choreographer, and audience must be able to connect the dancers’ movements to the represented target phenomenon.
Figure 13. Dancers from the Rambert Dance Company in The Comedy of Change. Several lines of evidence connect the ability to dance with imitation. By permission of Hugo Glendinning.
The most transparent connection between dance and imitation, however, will be readily apparent to just about anyone who has ever taken or observed a dance lesson; that is, dance sequences are typically learned through imitation. From beginner ballet classes for infants to professional dance companies, the learning of a dance routine invariably begins with a demonstration of the steps from an instructor or choreographer, which the dancers then set out to imitate. It is no coincidence that dance rehearsal studios around the world almost always have large mirrors along one wall. These allow the learner to flit rapidly between observing the movements of the instructor or choreographer and observing their own performance. This not only allows them to see the correspondence, or lack of correspondence, between the target behavior and what they are doing, but also allows the dancers to connect feedback from their muscles and joints to visual feedback on their performance, allowing error correction and accelerating the learning process.
Prospective new members of professional dance companies are given challenging auditions in which they are evaluated partly on their ability to pick up new dance routines with alacrity, an essential skill for a dancer. Dancing is not just about body control, grace, and power, but it also demands its own kind of intelligence. A key element in whether or not a dancer makes the grade essentially comes down to how good they are at imitating. A professional dancer at the Rambert once told Nicky and me that she had recently taken up sailing, and her instructor was flabbergasted at how quickly she had picked up the techniques involved. What the instructor had failed to appreciate was that dancers earn their living by imitation.
Imitation is not the only cognitive faculty that is necessary for learning dance. Also important is sequence learning, particularly in choreographed dances, which require the learning of a long, and often complex, sequence of actions. Even improvised dances such as the Argentine tango require the leader to plan a sequence of movements that provide the basis for the exquisite conversation between leader and follower. As we have learned, long strings of actions are very difficult to learn asocially, but social learning substantially increases the chances that individuals will acquire the appropriate sequence. Our ancestors were predisposed to be highly competent sequence learners because many of their tool-manufacturing and tool-using skills, as well as food-processing techniques, required them to carry out precise sequences of actions, with each step in the right order. The fact that these sequence-learning capabilities are clearly exploited in dance provides further evidence of the extent of the surprising connection between imitation and dance.
Culture evolves in two senses: cultural phenomena change over time, and the capacity for culture itself evolved. Evolutionary biology can shed light on these issues by helping to explain how the psychological, neurological, and physiological attributes necessary for culture came into existence. In the case of dance, evolutionary insights explain how humans are capable of moving in time to music; how we are able to synchronize our actions with others or move in a complementary way; how we can learn long, complex sequences of movements; why it is that we have such precise control of our limbs; why we want to dance what others are dancing; and why both participating in dancing and watching dance are fun. Armed with this knowledge, we can make better sense of why dance possesses some of the properties that it does, and why dances changed in the manner they did. As it is for dance, so it is for sculpture, acting, music, computer games, or just about any aspect of culture. Biology provides no substitute for a comprehensive historical analysis. However, our understanding of the underlying biology feeds back to make the historical analysis so much richer and more intelligible.
References: Mesoudi et al. 2004, 2006; Mesoudi 2011.
This is perhaps because the arts place a particular premium on creativity and originality, and also possibly because evolution has a bad name in some areas of the humanities (see Laland and Brown 2011 for historical details).
Morgan 1877; Spencer 1857, (1855) 1870; Tylor 1871.
Darwin 1859.
A story persists that the apple logo found on iPhones and Macintosh computers is a tribute to Alan Turing, the father of modern computing, who died by biting into an apple laced with cyanide. While differing opinions abound, this story sadly appears more likely to be an urban legend than the truth.
Turing 1937; Minsky 1967, p. 104.
Gould and Vrba 1982.
Hoppitt and Laland 2013.
Galef Jr. 1988.
Strictly, the “most learning” referred to here should read most “instrumental” (or “operant”) learning.
Pulliam and Dunford 1980.
For a recent review, see Brass and Heyes 2005.
Laland and Bateson 2001.
Rizzolatti and Craighero 2004.
Striedter 2005.
Iacoboni et al. 1999.
This form of social learning is typically referred to as “observational conditioning.”
Bronowski 1973.
Striedter 2005.
Deacon 1997.
Striedter 2005; Heffner and Masterton 1975, 1983.
Barton 2012.
Prints of these are still available for $20 each (https://secure.donationpay.org/chimphaven/chimpart.php).
For an accessible animal behaviorist’s assessment of elephant painting, see http://www.dailymail.co.uk/sciencetech/article-1151283/Can-jumbo-elephants-really-paint—Intrigued-stories-naturalist-Desmond-Morris-set-truth.html.
Tourist camps in Thailand that boast “painting elephants” have attracted criticism from animal rights activists who express concerns that the training regimes may be cruel to the animals. The tourist camps, in response, claim that the elephants are mentally and socially healthy.
Footage of painting elephants and chimpanzees can easily be found on YouTube.
Stoeger et al. 2012.
Plotnik et al. 2006.
Consistent with this argument is the finding that rhesus monkeys can be trained to produce mirror-induced, self-directed behavior resembling mirror self-recognition, with appropriate visual-somatosensory training that links visual and somatosensory information (Chang et al. 2015).
Striedter 2005.
Hugo (1831) 1978, p. 189.
The first use of perforated shells as beads is dated to over 100,000 years ago (D’Errico and Stringer 2011, McBrearty and Brooks 2000). The shells frequently have geometrical patterns cut into them, and have been colored with pigments. The use of red ochre as a painting material dates back further. Engraved ostrich shells dated to 60,000 years ago have been found in South Africa (Texier et al. 2010). By around 45,000 to 35,000 years ago, art was widespread (at least in western Europe) and highly consistent, and comprised pierced beads of ivory and shells, etched and carved stones, engraved decorations on bone and antler tools and weapons, and sculpted statues of animals and female figures, which were thought to be fertility symbols. However, the most evocative and striking images of Paleolithic artwork are unquestionably the magnificent cave art paintings discovered in several European countries (Sieveking 1979). Many caves are renowned for their artwork; the oldest include the spectacular paintings found in the Chauvet Cave in France, dated to 30,000 years ago. Perhaps the most remarkable collection of cave paintings is at Lascaux in Dordogne, France, where an incredible 2,000 painted images of horses, deer, cattle, bison, humans, and a 5-meter-high bull have been dated to 18,000–12,000 years ago. Also renowned is the beautiful painted ceiling of the cave at Altamira, in northern Spain. This was the first cave art to be discovered, in 1879. The art at Altamira, which has been dated to around 19,000–11,000 years ago, comprises stunning representations of bison, horses, and other large animals, with extraordinary use of colors and shading to indicate depth. The quaint story of its discovery details that the paintings, which are on a low ceiling, were initially missed by the team of archaeologists, but were spotted by the 8-year-old daughter of one of the team; she was the only individual small enough to stand erect and still look up at the ceiling (Tattersall 1995).
There is the expected regional variation, with particular techniques, styles and materials used in specific locations, indicating that the art expressed particular meanings that were socially learned and shared by the members of the community (Zaidel 2013). The paintings record for posterity what dominated the minds of those peoples, the animals that they lived by and stalked, and the power and potencies that those creatures symbolized. The correspondence between those species that were painted and those that have been independently verified as present is sufficiently tight that ecologists now use Paleolithic art to infer species distributions (Yeakel et al. 2014). There is also continuity over time, as the same methods and skills are reproduced throughout the millennia. For instance, the European cave art tradition lasts tens of thousands of years, while pigments such as red ochre are still used in rock paintings today (McBrearty and Brooks 2000). These traditions were passed on from one generation to the next, picking up innovations from numerous creative, avant-garde, or radical individuals along the way, in a continuum that stretches back to the origins of our species, and forward to those exhibits found in today’s contemporary art museums. Finally, the observed patterns of change are historically contingent. Like technology, novel art does not spring forth fully formed from the mind of the maker, but rather is a creative reworking of existing artistic forms.
The company was Ballet Rambert until 1966, and then Rambert Dance Company until 2013.
Laland et al. 2016.
Byrne 1999, Laland and Bateson 2001, Heyes 2002, Brass and Heyes 2005.
Carpenter 2006.
Heyes and Ray 2000, Laland and Bateson 2001.
Brown et al. 2006.
Some animals’ movements, such as the coordinated jumping and wing-flapping courtship of pairs of Japanese cranes, or the communication system of honeybees, possess some dance-like properties, but these are species-specific behavior patterns that have evolved to fulfil quite separate functions.
Nettl 2000.
Fitch 2011.
Patel 2006.
In contrast to this hypothesis, I also place emphasis on motor imitation.
Doupe 2005, Jarvis 2004.
Cacatua galerita eleonora
You can see Snowball on YouTube at https://www.youtube.com/watch?v=cJOZp2ZftCw.
Moore 1992.
Patel et al. 2009.
Schachner et al. 2009, Patel et al. 2009, Dalziell et al. 2013.
Hoppitt and Laland 2013.
Fitch 2013.
Hoppitt and Laland 2013.
Cook et al. 2013, Fitch 2013.
Dalziell et al. 2013.
Indeed, in a number of both classical and modern dance forms, motor imitation is key. Dancers are required to copy the process but not the product of the movement, and operate under socially constrained rules that depend critically on the technique and style of their particular school (e.g., Martha Graham vs. Merce Cunningham styles).
Feenders et al. 2008.
Clarke and Crisp 1983.
Clarke and Crisp 1983, Dudley 1977.
Clarke and Crisp 1983.
Laubin and Laubin 1977.
Clarke and Crisp 1983.
Correction may also occur through manual shaping of the dancer’s body by the teacher or, to a lesser degree, through verbal instruction. In some dances, specific steps are given verbal labels, as in ballet in particular, which has its own elaborate glossary of terms, such as fondu, arabesque, chassé, and grand jeté, each with its own characteristic movements. Except in those cases, however, describing bodily movements with words is typically difficult. Hence, when dance instruction is given verbally, it is often through the use of imagery, where again an ability to relate one’s own bodily movements to another object, emotion, or entity is required.
I am indebted to Nicky Clayton for drawing my attention to many of these points.
Whalen et al. 2015.
"raw_score": 2.9536068439483643,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Art & Design |
Anatomy and Physiology
III. Anatomy and Physiology
Figure 5 - Nomenclature of some external features (fins)
Figure 6 - Schematic view of gills and internal organs
A specialized structure is the gas (swim) bladder. Used for buoyancy adjustment, it may be connected to the esophagus. It may also have a "gas" gland, which enables release or resorption of gas between the blood and the gas bladder.
Anatomy and Physiology by System
a. Outermost layer is a mucous (cuticle) layer composed of mucus, mucopolysaccharides and immunoglobulins.
b. Epidermis (Malpighian cells throughout the epidermis retain the capacity for cell division)
d. Scales (calcified plates originating in the dermis and covered by the epidermis)
Figure 7 - Schematic view of skin of fish
1. operculum (structure which covers the gills)
a. There is a set of 4 gill arches on each side in teleosts, with primary (1°) and secondary (2°) lamellae on each gill arch.
The epithelial layer lining the filamentous gill structures is very thin, which allows gas exchange to occur here.
Schematic view of histologic features of gill arches in relation to water flow (after Reinert, 1992)
Figure 10 - Gill filaments of bluegill illustrating primary and secondary lamellae. H&E 16X
The gills also regulate exchange between salt and water and have a major role in the excretion of nitrogenous waste products.
Muscle types: Red (slow, cruising); White (strenuous bursts of swimming, rapid fatigue)
2. Bone can be cellular or acellular in teleosts
Figure 12 - Transverse section through caudal portion of body (posterior to body cavity). Note vertebral column with caudal vein ventral to column. Note red and white muscle.
Figure 13 - Cellular bone. Note periosteum, osteocytes, blood vessel and adjacent cartilage. H&E, 67X
1. heart (sinus venosus, one atrium, one ventricle, and an elastic bulbus arteriosus)
2. circulation - blood flows from the heart to the ventral aorta to the afferent branchial arteries to the gills for oxygenation and progresses via the efferent arteries to the dorsal aorta.
Figure 14 - Fish have a "2 chambered" heart; one atrium and one ventricle, located within a pericardial sac.
1. stomach (carnivorous fish have a short digestive tract when compared with herbivorous fish)
2. pyloric ceca (blind-ended finger-like projections extending outward from pyloric valve region)
3. intestine (not possible to divide the intestine into large and small intestine)
4. liver with gall bladder (does not have the typical lobular architecture that is present in mammals; there are no phagocytic Kupffer cells in the liver)
5. pancreas (may be interspersed with mesentery of pyloric ceca or along portal veins of liver)
Figure 15 - Schematic ventral view of digestive tract
Figure 16 - Stomach with numerous pyloric ceca leading to their opening into the pyloric region.
Figure 17 - Vacuolated hepatocytes secondary to metabolic stress or disturbances.
1. Hematopoiesis occurs especially in the spleen and kidney (hematopoietic activity is not found in the medullary cavity of bones).
2. Blood cells: erythrocytes have nuclei. Fish have thrombocytes (no platelets).
Figures 18 and 19 - Schematic view of the location of hematopoietic organs in fish
Fish have a thymus. They have lymphoid tissue but no lymph nodes.
2. cell mediated response
3. humoral antibody (IgM) response
1. kidneys - A primary function of the kidney is osmoregulation. In fresh water the kidney saves ions and excretes water. In saltwater fish, the kidney excretes ions and conserves water. The majority of nitrogenous waste is excreted through the gills. The "head" kidney (most cranial portion) contains predominantly hematopoietic and lymphoid tissue; the posterior kidney contains the excretory tissue with some hematopoietic and lymphoid tissue.
Figure 20 - Kidney, posterior excretory portion, 10X
Figures 21 and 22 - Reproductive organs of the female showing the common pattern in teleosts (left) and the pattern seen in salmonids (right) (after Hoar, 1969)
Figure 23 - Reproductive organs of male (after Hoar, 1969)
Figure 24 - Gross view of ovary showing multiple large eggs.
Figure 25 - Ovary with eggs in varying stages of development. H&E 16X
Figure 26 - Ovary with several eggs, including one undergoing dissolution. H&E 67X
1. Adrenal Gland - The adrenal cortical tissue is represented by the interrenal cells. The adrenal medullary cells may vary in location.
2. Thyroid Gland - Thyroid follicles are very similar to mammalian thyroid tissue. Thyroid follicles are widely distributed throughout the viscera.
3. Pancreas - Islets of Langerhans may be grossly visible.
Figure 27 - An overview of the location of various endocrine glands is shown below. (after Bond, 1979)
4. Corpuscles of Stannius (located in kidney) - Secrete hypocalcin which acts with calcitonin to regulate calcium metabolism (the parathyroid glands are absent in fish).
Cranial nerves (ten)
L. Organs of special sense
Figure 28 - Semicircular canals and membranous labyrinth
In the lateral line, neuromasts (sensory cells) act as mechanoreceptors within the canal and relay information to the brain. There are tiny pores which provide a passage from the outside environment to the canal.
"raw_score": 2.3030192852020264,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Science & Tech. |
Pressure gauges are among the most commonly calibrated instruments in the process industry. Regular calibration of pressure gauges is important because, being mechanical instruments, they are prone to drifting due to mechanical stress. Calibration of pressure gauges is a crucial process and should be done with utmost care, preferably by calibration experts. Listed below are the 10 most important factors to keep in mind for pressure gauge calibration:
- Classes of Accuracy—It is vital to determine the specified accuracy class of your gauge while calibrating it. In most cases, the accuracy class is specified as a ‘% of range’, which means that if the accuracy class is 1 percent and the scale range is zero to 100 psi, the acceptable error is ±1 psi. It is extremely important to know the accuracy class of the gauge you are calibrating, because it determines the acceptable error limits and affects other calibration decisions.
- Pressure Medium Used—Gas and liquid are the most commonly used pressure media for calibrating a pressure gauge. The gas is normally regular air, and the preferred liquid is either oil or water. The choice of pressure medium depends on the medium that is being used in the process connected to the gauge being calibrated. The selection also relies heavily on the pressure range.
- Contamination—Pressure media used for calibration should not be prone to contamination. Dirt is a common contaminant in such cases and it could be present inside the gauge which could disrupt the process and harm the calibration equipment.
- Height Difference—The height of the gauge being calibrated and the height of the calibration equipment should be considered before beginning the calibration process. This is important because a height difference can cause an error due to the hydrostatic pressure of the pressure medium. If it is impossible to set the equipment and the gauge at the same height, then the effect of the height difference should be calculated and taken into account during calibration (a short calculation sketch follows this list).
- Leak Testing—Before initiating the calibration, it is extremely crucial to perform leak test of the piping. If any leaks are present, they may give way to errors. A simple leak test is to pressurize the system and then let the pressure stabilize. The pressure should be monitored during this test to ensure that it doesn’t drop too much.
- Adiabatic Effect—You should be conscious of the adiabatic effect while calibrating your pressure gauge. In a closed system that uses a gas as the pressure medium, pressurizing quickly heats the gas, and the higher temperature pushes the pressure up further. As the gas then cools, it contracts and the pressure drops. This drop in pressure may seem like a leak, but it is actually the result of the adiabatic effect.
- Torque Force—This factor is crucial especially in case of torque sensitive gauges. Excessive force should not be used when connecting the pressure connectors to the gauge. If more than necessary force is used, the gauge can get damaged. It is best to use appropriate tools, adapters/seals.
- Mounting Position – Since pressure gauges are mechanical instruments, their position during calibration affects the reading. Hence, it is strongly recommended to calibrate a gauge in the same position which it is used in actual process of measuring. Also follow the manufacturer’s instructions strictly related to mounting and operating positions.
- Pressure Generation – For accurate calibration of a pressure gauge, a pressure source is a must so that the necessary pressure can be applied to the gauge. This can be done with:
- Pressure hand pump
- Pressure regulator with a bottle
- Dead weight tester
- Exercising – Pressure gauges are known to have some amount of friction in their movement, which can cause changes in their performance. Hence, if the gauge has not had pressure applied to it in some time, it should be "exercised" before calibrating it. To do so, the nominal maximum pressure should be applied to the gauge and allowed to sit for a minute before venting it out. This ‘exercise’ should be repeated up to 3 times before carrying out the calibration process.
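To put rough numbers on two of the factors above, here is a minimal calculation sketch. It is illustrative only and not from the article: the function names, the water density figure, and the example values are my own assumptions, and real calibrations should follow the gauge manufacturer's and the laboratory's documented procedures.

```python
# Illustrative sketch only -- not from the article. The function names, the
# density value, and the example numbers below are my own assumptions.

G = 9.80665            # standard gravity, m/s^2
PA_PER_PSI = 6894.76   # pascals in one psi

def accuracy_band_psi(accuracy_class_percent, span_psi):
    """Tolerance implied by an accuracy class quoted as '% of range'."""
    return span_psi * accuracy_class_percent / 100.0

def head_correction_psi(height_m, density_kg_m3):
    """Magnitude of the hydrostatic offset caused by a height difference
    between the gauge and the reference: delta_p = rho * g * h."""
    return density_kg_m3 * G * height_m / PA_PER_PSI

if __name__ == "__main__":
    # A 0-100 psi gauge of accuracy class 1 percent: acceptable error is +/- 1 psi.
    print(accuracy_band_psi(1.0, 100.0))                # -> 1.0
    # A 0.5 m height difference with water (~1000 kg/m^3) as the medium:
    print(round(head_correction_psi(0.5, 1000.0), 2))   # -> 0.71 psi
```

With water as the medium, a half-meter height difference already amounts to roughly 0.7 psi, most of the ±1 psi tolerance of that gauge, which is why the height effect must either be eliminated or corrected for and recorded.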
Calibration tells you exactly how erroneous your pressure gauge is. Hence, to ensure that your pressure gauge consistently delivers accurate results, it should be calibrated frequently by calibration experts.
Edward Simpson works for RS Calibration Services and has a knack for finding faults in machines and does not rest until they are rectified to perfection. He lives in Pleasanton, CA and loves to write about how machines work and about the importance of proper care and calibration of equipment. When he's not working or writing, he loves to run to stay fit.
"raw_score": 2.1543352603912354,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Industrial |
The Human Liver
The liver is an internal organ which lies in the abdominal cavity of the body of animals. In humans, it is the largest glandular organ of the body, and generally weighs about 3 lb (1.35 kg).
Physically, the liver looks like a large reddish brown lump which is divided into four unequal lobes. Liver tissue is made up of hepatic cells (hepatic is an adjective denoting something concerned with the liver) which are grouped into lobules; each lobule is served by a capillary which is a minute subdivision of the two large vessels which supply the liver with blood: the hepatic artery which brings blood full of oxygen from the aorta; and the portal vein, which brings blood full of digested food from the small intestine.
From its strategic location between the gut and the rest of the body, the liver plays many important roles in the body, which can be grouped into three broad areas:
- metabolism: The liver metabolizes carbohydrates, fats, proteins from digested food, storing them, using them to synthesize new proteins, or excreting them. It manufactures and secretes bile, which is stored in the gall bladder and released into the small intestine, where it is used to emulsify fats. The liver handles the conversion of glucose to glycogen for storage and regulates the proper level of glucose in the blood. As it breaks down digested proteins, it produces and eliminates urea, thereby removing ammonia from bodily fluids. Essentially, the liver helps maintain homeostasis in the body, regulating the body's supply of important nutrients and hormones.
- filtration: The liver contains phagocytic Kupffer cells, which remove substances from the blood for excretion - bacteria, endotoxins, viruses, antigens, and lots of other harmful stuff, including bilirubin, the degradation product of the red pigment in blood. The liver detoxifies the body of drugs.
- storage: The liver generally stores about 600 ml of blood, though it can hold more if it needs to, for example in times of emergency; in addition, it stores vitamins and minerals.
Liver disease encompasses a wide range of conditions. The most common are hepatitis, an inflammation of the liver that can have chronic effects, and cirrhosis, a chronic progressive scarring of the liver that leads ultimately to liver failure. In addition, long term alcohol abuse can negatively impact the liver. There are also rarer genetic disorders that harm the liver, including hemochromatosis, Wilson's disease, and cystic fibrosis.
The first liver transplant was performed in 1963, and today the operation has become common, with the majority of patients surviving the dangerous first year. In 1994 a bioartificial liver, part cloned liver cells, part machine, was used; it's kind of like a kidney dialysis machine, and can support patients with liver failure who are waiting for transplants. In addition, a liver can regenerate, and up to 75% of it can be safely removed and will grow back again. This procedure, called liver resection, provides a way to cure patients with tumors of the liver.
The Metaphoric Liver
The liver has long been seen as an important organ of emotion and even thought. The Greek physician Galen, whose second century ideas remained the basis for western medical thinking till the seventeenth century, viewed the liver as the seat of the vegetative soul, an ancient plant-based soul which was retained by higher beings. The liver, he thought, received food and converted it to natural spirits, which it then sent to the heart, from whence it reached the rest of the body.
Shakespeare saw the liver as a seat of bitter anger and bile. Older English usage applied the adjective liverish to a crabby or grouchy person, based on the belief that the liver could produce an excess of bile, giving someone a reddish complexion and peevish manner.
The Edible Liver
People have long eaten liver from cows or calves, pigs, lambs, chickens, and geese; livers from younger animals will tend to be more healthy, because the livers of older animals have had much longer to accumulate nasty chemicals, hormones, and medicines that the animal might have been fed. In addition, liver from younger animals will be paler in colour, with a milder flavour and odour and more tender texture than the liver of adult animals.
Goose liver is the most expensive of the edible livers; it's usually known by the swishy French name foie gras, and, though delicious, the ways that geese are fattened and slaughtered can be reprehensible. *Sigh*
Liver should be cooked quickly, for example by lightly sauteing; cooking longer tends to toughen it. Liver is rich in iron, protein, and vitamins A and B.
Ode to the Liver
fragment of a poem by Pablo Neruda, translated by Oriana Josseau Kalant
I sing to you
and I fear you
as though you were the judge
and if I can not
surrender myself in shackles to austerity,
in the surfeit of
or the hereditary wine of my country
to disturb my health
or the equilibrium of my poetry,
giver of syrups and of poisons,
regulator of salts,
from you I hope for justice;
I love life: Do not betray me! Work on!
Do not arrest my song.
"raw_score": 1.9956272840499878,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Health |
Negotiators from the WIPO’s 186 members, plus accredited observer bodies, are currently working on a draft international treaty meant to benefit designers seeking design protection by simplifying and harmonizing a number of filing rules that currently vary greatly by country.
The work in the WIPO Standing Committee on the Law of Trademarks, Industrial Designs and Geographical Indications (SCT) seeks to establish a number of common rules and procedures for design applications in countries around the world. This process could lead ultimately to a Diplomatic Conference, where treaties are finalized.
Nine major improvements are under debate in the draft treaty, ranging from the modes of representation or illustration for a design in an application, to simplifying the creation and filing of legal documents.
Delegates also discuss issues related to the implementation of a future treaty, including technical assistance and capacity building commitments, such as training and help for establishing the necessary technical infrastructure for developing and least developed countries that will implement changes under a new treaty.
The goal of the draft treaty is to make it easier and cheaper for industrial-design holders to protect their work around the world by codifying well-established registration practices among eventual signatories.
An industrial design is the ornamental or aesthetic aspect of an article. The design may consist of three-dimensional features, such as the shape or surface of an article, or of two-dimensional features, such as patterns, lines or color.
Industrial designs are applied to a wide variety of products of industry and handicraft: from technical and medical instruments to watches and jewelry; from housewares and electrical appliances to vehicles and architectural structures; from textile designs to leisure goods.
Many of the world’s iconic products are also good examples of items whose “look” can be protected: Apple Inc.’s iPhone, chairs by Charles and Ray Eames, and Volkswagen Group’s Beetle. Industrial designs enhance a product’s attractiveness and appeal, adding to its commercial value and increasing its marketability.
Protecting an industrial design helps to ensure a fair return on investment. An effective system of protection also benefits consumers and the public at large by promoting fair competition and honest trade practices.
Further, protecting industrial designs helps economic development by encouraging creativity in a country’s industrial and manufacturing sectors while contributing to commercial expansion and increasing export of national products.
An industrial design is primarily of an aesthetic nature and does not protect any technical features of the article to which it is applied. To be protected under most national laws, an industrial design must be new and original. Novelty or originality is determined by comparing the application to what products or protected designs are already in existence.
The Hague System for the International Registration of Industrial Designs, administered by WIPO, provides a mechanism for registering a design in countries and intergovernmental organizations that are member of the Hague Agreement, currently numbering 60 contracting parties.
The system allows owners of an industrial design to obtain protection in several countries by simply filing one application with the International Bureau of WIPO, in one language, with one set of fees in one currency (Swiss Francs). An international registration produces the same effects in each of the designated countries, as if the design had been registered directly with each national office, unless protection is refused by the national office of that country.
The Hague System simplifies the management of an industrial design registration, since it is possible to record subsequent changes or to renew the registration through a single procedural step with the International Bureau of WIPO.
While WIPO provides a single filing system among 60 contracting parties, there are still many differences in national filing regulations among WIPO’s 186-nation membership, and the proposed new treaty aims to harmonize many of them. International filing regulations for patents and trademarks are already covered by similar treaties, overseen by WIPO.
Where we’re going
During the 2013 Assemblies of the Member States of WIPO (Fifty-Second Series of Meetings) the WIPO General Assembly decided to request the Standing Committee on the Law of Trademarks, Industrial Designs and Geographical Indications to finalize its work on the text of the basic proposal for a design law treaty. The extraordinary session of the General Assembly in May 2014 will take stock of progress made and decide on whether to convene a diplomatic conference in 2014 in Moscow. The Russian Federation has offered to host any diplomatic conference.
Here is a quick glance at some of the topics under discussion:
1. Choosing how to represent or illustrate a design. An applicant will be able to choose whether to illustrate or represent the design using drawings, photographs, other visual media (for example, computer-aided design) or a combination of media.
2. Reducing number of copies of each illustration required for filing. An applicant will not have to submit more than three copies of each illustration or representation when filing an application (or just a single copy in the case of e-filing).
3. Registering a set of related designs in a single application. It will be possible to register several related designs in a single application, rather than register each individual design in a separate application. There will be safeguards in place to ensure that the original filing date is protected in the event that one of the individual designs is not accepted.
4. Gaining a secure filing date from which your design is protected. It will be simpler to gain a secure filing date for the protection of your design. In order to gain a secure filing date, you will only need to provide details on the applicant, an illustration of the design and possibly a fee.
5. Registering a design six months after public disclosure. It will be possible to register a design up to six months after a new design has been publicly released.
Or as an alternative option:
6. Registering a design 12 months after public disclosure. It will be possible to register a design up to 12 months after a new design has been publicly released.
7. Obtaining secrecy for six months after filing an application. It will be possible to keep a design secret for at least six months after filing a new design.
8. Standardizing the information needed to submit (or make changes to) a design registration. The information needed to submit a new application will be standardized internationally.
9. Simplifying the procedures to present legally valid documents in another country. There will be a simplification to the requirements for creating and signing legal documents.
Last Updated: January 2014
"raw_score": 2.918203115463257,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Crime & Law |
by Jonathan Hull
The following essay is part two of a three part series that explores how something as commonplace as dust can profoundly effect natural systems. Part one, which can be found here, explored its effect on the biosphere as a whole. This second part takes a microscopic look at dust on plant leaves. The third and final essay will be published next week. This essay will take what has been discovered in parts one and two and consider practical applications, by way of foliar sprays, in managing fertility in Ohio gardens. Foliar sprays have the potential of becoming one of our most powerful tools available in creating healthy gardens.
Our biosphere is an incredibly complex, dynamic and ever adapting play of forces. In the first part of this series, we met the main actors of this play in grand elemental forms: sun, earth, water and wind. The title role is played by life; the adaptive tissue that knits these elemental forces into a global system.
We learned that minerals in dust are essential to the function of the Amazon rainforest as well as numerous other ecosystems. Changing distributions of this dust and its effects on life can even modulate changes in global climate patterns ultimately driven by the earth’s orientation to the sun.
In the next part of our investigation, these same elemental actors will reprise their roles in another complex and dynamic play of forces. Only it will play out not on a global scale but on a microscopic one: the surface of a leaf.
[Figure 1: Microscopic detail of Leaf. From: Stomata Of Lavendula Dentata, Sem Photograph]
Recall that this whole investigation started when I realized that the role of dust in global nutrient cycling had implications for the use of foliar applications in the garden. These are a broad range of techniques that attempt to boost plant health by changing the microscopic environment of its leaves. One technique that seemed to work well in my garden was one that covers plant leaves with a spray of minerals in a solution. I was a tentative proponent because I did not understand how this could work so well.
When I learned how much dust moves around the globe and how important it is to any number of ecosystems; the results from foliar sprays no longer seemed incidental. At the time it was pure speculation, but the thought occurred that it was entirely likely that plants adapted the ability to absorb minerals that were deposited on its leaves from atmospheric dust. This led me to research if this was indeed true and in doing so I learned the features of several important processes that take place on the leaf. These features will be incredibly important to my future use of foliar applications and to the realization of their maximum benefit.
II: When the Dust Settles
When you consider where plants first emerged in the history of life, it is not surprising that in some degree, they can all absorb nutrients through their leaves. Plants first evolved in bodies of water and most aquatic plants use their leaves as the main sites for mineral uptake.
Once inside the leaf these minerals can move throughout the plant via the liquid sap that connects the various metabolic functions that happen inside of it. This flow of minerals is critical to the function of a plant. For instance the electrical resonance of magnesium is essential to the structure and function of chlorophyll. Now in the case of aquatic plants, their leaves are structured in a way that allows them to be relatively open to the flow of nutrients. Their leaves can be open to this flow because the surrounding water buffers them from the effects of the sun and wind.
[Figure 2: Aquatic Plant. From: … quality for aquatic plants and fish | Activities and Information]
However, when plants moved onto land they no longer had this luxury. Separate and specialized structures were adapted for gathering the sunlight, nutrients and carbon dioxide needed for their metabolism. Roots, protected underground from the harsh effects of the sun and wind, became the main site for mineral uptake in land plants. Terrestrial plants still need to gather energy from the sun, so they adapted a waxy covering on their leaves called the cuticle. This was done mainly to protect the plant from the loss of too much water through its leaves by evaporation. An interesting side note for this discussion: this layer also protects the plant from excessive leaching of nutrients from its leaves by rain. In any case, this protective layer made leaves in terrestrial plants a lot less open to the flow of nutrients than those found in aquatic plants.
[Figure 3: Water on a Leaf. From: Beyond the Human Eye: Plant Cuticles]
This cuticle layer is the main barrier to the entry of nutrients. It is by no means impenetrable and there are practical strategies for moving nutrients through it, but this mechanism is outside the scope of this essay. Instead we will look at a more promising avenue. You see, the cuticle layer is not continuous because leaves cannot be completely closed to the outside world: plants also use leaves to breathe.
Plants breathe, oxygen out and carbon dioxide in, through openings called stomata. (See Figure 1 above). Thousands of these microscopic mouths can be found on the surface of leaves. Stomata also serve as thermo-regulators by allowing the plant to transpire water through these openings to cool the plant when needed. A self-regulating system exits within the leaf where stomata open and close depending on environmental conditions. The system balances the trade-off between breathing and losing too much water to evaporation.
When a stoma opens, it forms one tiny point of interface between the water inside the plant and the atmosphere. It might seem obvious that the stoma would be a convenient avenue for the entry of foliar minerals. But it is not that simple. In fact, for some time it was considered impossible for nutrients to enter the plant in this way. Although there are thousands of stomata on a leaf, added together they still comprise a tiny surface area. The dust particle would have to land in just the right place to make contact. Additionally, the vapor pressure from evaporating water flows out, which would seem to prevent the movement of anything in.
Recent research has found that a unique process that occurs on the surface of leaves in the presence of dust changes how the stomata operate. This process transforms stomata into a main avenue for the entry of nutrients. To explain this phenomenon, we need to understand what happens on the surface of a leaf where dust collects. How,we need to consider, does dust interact with a leaf.
The leaves of many plants have adapted structures to facilitate the collection of dust onto their surfaces. These include microscopic ridges in leaves that trap dust and tiny hairs that grow out of the leaves, called trichomes, which change aerodynamics on a microscopic level to pull even more dust to the leaf surface. With such microscopic adaptations, plant surfaces have evolved into the major sink for atmospheric dust. To give you an idea of the amount, one study in Chicago found that urban trees, which occupy 11% of the city area, remove about 234 tons of dust from the air per year!
[Figure 4: Leaf Trichomes. From: Nettle Leaf Trichomes, Sem Photograph]
These particles are deposited on the leaf from dry dust in the atmosphere but also from particles left by evaporated rain droplets. Come to think of it, you might wonder how much of this dust is washed off by rain. One study found that although the large particles of dust were washed from the leaves of trees, the majority of smaller particles remained in the canopy. Another study found that 75% of the nitrogen deposited on a spruce forest never made it to the soil but was retained in the canopy!
Most of the particles of dust deposited on leaves are of a certain type that readily absorb moisture from the atmosphere. These are technically termed hygroscopic particles. On the leaf’s surface, they function in much the same way as they do in the creation of raindrops in the atmosphere; they serve as the nuclei for the condensation of water vapor. We might think of dust as dry, but in this case when considered at their own scale, they are positively soggy. Even at relatively low humidity levels, dust condenses water vapor on leaves where it would not otherwise stick.
Taken together, the moisture these particles gather produce the provocatively named “breath figures”: microscopically thin films of water that are invisible to the naked eye. These thin films of water remain persistent on the surface of plant leaves even under surprisingly hot conditions. Breath figures are not pure water, but a solution of the mineral particles that attract water from the air.
Breath figures have enormous implications for uptake of the nutrients into the leaf – especially through the stomata. In a newly opened “pristine” leaf, the liquid sap and all of the functions it connects are largely separate from anything that might occur on the leaf surface. Stomata on such leaves are just a tiny ports to the atmosphere. However, this new leaf quickly attracts dust particles, which, in turn, attract tiny films of water. Now when the stomata open, the liquid sap inside the plant connects to breath figures. The mineral can now land anywhere on the leaf and be connected to the stomata, and so connected to all of the functions that occur inside the leaf.
This continuous film of liquid that runs from the exterior of the leaf through the stomata and into the interior of the leaf is even thought to play a role in activating the opening of stomata themselves – a feedback loop that furthers the development on this continuous liquid connection.
Solutions always try to even themselves out. If there are differences in concentration within the solution, called concentration gradients, dissolved minerals will diffuse from regions of high concentration toward regions of low concentration. The movement driven by concentration gradients can overcome the outward pull of water evaporating through the stomata. Thus, a flow of nutrients from the plant's exterior to its interior becomes possible.
Plants have adapted different responses to the formation of these continuous liquid connections. Certain plants are particularly reliant on inter-leaf nutrient cycling. In the dusty grasslands of South Africa researches have shown how trees can establish themselves in poor soils, even where grasses might otherwise have a competitive advantage.. As the trees grow taller, they collect more dust and thrive that much more. The cycle reinforces itself in a positive feedback loop.
However, it can also be the case that plant leaves provide too much of a good thing. If there are too many hygroscopic particles activating too many stomata, and so increasing evaporation through the plant, it can significantly lower its ability to deal with drought. In this case we have a negative feedback loop.
Plants that live near the coast have had to adapt to this problem. These plants can receive heavy deposits of sea minerals from salt spray that is carried off the ocean by winds. These plants have adapted leaf structures that rapidly shed particles so that they are not desiccated by the formation of continuous liquid connections and the resultant opening of stomata.
One issue that I will briefly mention: plants have adapted to environments that over evolutionary timescales receive relatively consistent amounts of dust. The problem is that in the last 200 years there has been a 270% increase in particulate matter in the atmosphere – an increase caused by industrialization. Plants that had adapted to collect a steady stream of dust may now be overloaded. This increase in human-produced particulate matter, which has strong hygroscopic properties, has been theorized to be a major factor in the decline of forests the world over.
We will consider all of these factors in the final essay in this series. There we will discuss the practical use of all this knowledge in creating healthy and vibrant gardens. It should be noted, however, that we have highlighted only one tiny portion of a huge set of interconnected processes that affect how plants absorb nutrients through their leaves.
If one wants to master the use of foliar applications, there is so much more that needs to be understood and even more that has yet to be discovered. Perhaps a topic for future essays, but as such it is out of our scope. However, what has been sketched out so far is a good foundation for incorporating this tool into a general gardening practice.
"raw_score": 2.998007297515869,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Determining the dates of Ramadan
Ramadan (pronounced "rom-a-don"; a.k.a. Ramazan) is the holiest period in the Islamic year. It commemorates the ninth lunar month of the year 610 CE, when revelations began from God, via the angel Gabriel, to the Prophet Muhammad. These revelations were memorized by Muhammad and were later written down as the Qur'an.
There is no consensus among Muslims around the world about how to determine the first day of Ramadan.
Determining the starting date of Ramadan:
Ramadan is the ninth lunar month of the year. It begins at the time of the new moon -- a.k.a. the time of conjunction. This happens when the Earth, Moon, and Sun are lined up in that order.
The beginning of Ramadan has traditionally been based on Hilal Sightings -- the detection of the crescent new moon by the human eye without benefit of optical aids. For some Muslims around the world, the sighting of the moon in Saudi Arabia marks the start of Ramadan. For others, it is the moon sighting in their own country. For others, sighting of the new moon anywhere in the world triggers Ramadan at their location. 5 Still other countries use astronomical calculations to determine the actual timing of the new moon.
As a result of this lack of standardization, Ramadan began during 1429 AH (2008 CE) on:
- AUG-31 in Libya and Nigeria
- SEP-01 in 52 countries, including Australia, Mexico, Saudi Arabia, and the UK
- SEP-02 in 8 countries, including India, Iran, Morocco, Oman and Pakistan.
- Canada is not included in the above data. The country was unique in that no consensus was possible: Ramadan began for some Muslims on SEP-01 and for others on SEP-02. 1
During 2006-JUN, the Fiqh Council of North America (FCNA) held a meeting of Muslim jurists, Imams, astronomers and other believers to discuss whether to use astronomical calculations in the place of visual sightings. They determined that:
- Sighting the Hilal (the crescent moon) is not an act of 'ibadah (worship).
- Muhammad used ru'yah (sighting of the crescent moon) because most Muslims at the time lacked the knowledge to calculate the timing of the new moon.
- Originally, many Muslim jurists refused to accept astronomical calculations because "astronomy and astrology were not quite distinct sciences." They suspected that the predictions might have been based partly on magic.
- During the 20th century, an increasing number of Muslim jurists have accepted astronomical calculations.
- Calculations are a reliable and accurate method of determining the dates of Ramadan and the two Eids.
- Adopting this method would:
- Eliminate the problem of erroneous sightings of the crescent moon.
- Allow the dates to be determined far in advance;
- Simplify planning of events;
- Facilitate having Islamic holy days recognized;
- Encourage development of a world-wide Islamic calendar for all Muslims; and
- Improve the unity of Muslims worldwide.
The Council decided that "The new Islamic Lunar month begins at sunset of the day when the conjunction occurs before 12:00 Noon GMT." 2 This definition covers Ramadan and the other lunar months in the year.
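As a rough illustration of how that criterion could be applied in software, the sketch below takes a conjunction time expressed in GMT and returns the civil date at whose sunset the new month would begin. This is a simplified reading of the rule quoted above, not an official algorithm: the function name and the example times are hypothetical, the conjunction time must come from an astronomical source, and a real calendar still needs local sunset times, since the Islamic day itself begins at sunset.

```python
# A simplified reading of the FCNA criterion quoted above -- not an official
# algorithm. The conjunction time must come from an astronomical source; the
# function name and the example times below are hypothetical.
from datetime import datetime, date, time, timedelta

def month_start_date(conjunction_utc: datetime) -> date:
    """Civil date at whose sunset the new Islamic month begins."""
    if conjunction_utc.time() < time(12, 0):   # conjunction before 12:00 noon GMT
        return conjunction_utc.date()          # month begins at sunset of that day
    return conjunction_utc.date() + timedelta(days=1)   # otherwise, the next day

print(month_start_date(datetime(2030, 1, 4, 8, 30)))    # morning conjunction -> same day
print(month_start_date(datetime(2030, 1, 4, 14, 45)))   # afternoon conjunction -> next day
```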
On 2013-MAY-09, the French Muslim Council (CFCM) agreed to start using astronomical calculations to establish the date of Ramadan and other Islamic holy days. Reuters reported:
"Council President Mohammad Moussaoui said the old method played havoc with French Muslims' schedules for work, school and festivities. France's five million Muslims are the largest Islamic minority in Europe.
"Now all this will be simplified," he said, and promptly announced the Ramadan fast would begin on July 9 this year. ..."
"This is historic. Now all Muslims in France can start Ramadan on the same day," said Lyon Muslim leader Azzedine Gaci." 4
Observances during Ramadan:
- Ramadan has traditionally started at the first visual sighting of the 9th crescent moon of the year by the unaided eye. It lasts for 29 or 30 days, a full lunar month.
- Lailat ul-Qadr (a.k.a. Night of Power) is the anniversary of the night on which the Prophet Muhammad first began receiving revelations from God. Muslims believe that this occurred on one of the last odd-numbered nights of Ramadan.
- Id al-Fitr (a.k.a. "Id") is the day which follows the month of Ramadan. It is pronounced "eed-al-fitter." It is the first day of the 10th month -- Shawwal. It is a time of rejoicing. Houses are decorated. Muslims buy gifts for relatives. On this feast day, Muslims greet each other, saying "Eid mubarak" (eed-moo-bar-ak), meaning "blessed Eid," and "taqabbalallah ta'atakum," which means "may God accept your deeds." Many Muslim communities hold bazaars following prayers.
The approximate dates of Ramadan are listed below from 1938 to 2038. Dates, as observed in various countries, may be a day or two offset from the following:
In the above table, future dates are estimates.
The abbreviation "H" or "AH"
is used after dates in the Islamic calendar. They stand for "Hegira" or "Anno
base of the Islamic calendar is 622
CE, the year of the Hegira, when the Prophet Muhammad traveled from Mecca to
Medina in what is now Saudi Arabia.
Because Ramadan is based on the lunar calendar, it is observed about 11 days earlier each year. Thus, about every 35 years, it goes through all four seasons. 3
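The arithmetic behind that drift can be sketched as follows; the rounded month and year lengths are illustrative average values of my own, not figures from this article.

```python
# Back-of-the-envelope arithmetic behind the ~11-day annual shift. The rounded
# month and year lengths are my own illustrative values.
SYNODIC_MONTH = 29.53   # mean days between successive new moons
SOLAR_YEAR = 365.24

lunar_year = 12 * SYNODIC_MONTH            # about 354.4 days
drift_per_year = SOLAR_YEAR - lunar_year   # about 10.9 days earlier per year
full_cycle_years = SOLAR_YEAR / drift_per_year

print(round(lunar_year, 1), round(drift_per_year, 1), round(full_cycle_years, 1))
# -> 354.4 10.9 33.6  (a full tour of the seasons in roughly three and a half decades)
```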
Related essay: The fast of Ramadan (overview, purposes, discipline, health concerns, activities, etc.)
- "Ramadan 1429," Moon Sighting, at:
- "New Way for a New Moon," Islam City, 2006-SEP-22, at:
- The beginning of Ramadan from 1357-1460 Hijri," Ksulaiman1, 2012-AUG-17, at:
- Tom Heneghan, "French Muslims look to science to determine start of Ramadan," Reuters, 1013-MAY-09, at: http://www.reuters.com/
- "The Islamic Calendar," IslamiCity, 2013, at: http://www.islamicity.com/
Copyright © 2001 to 2013 by Ontario Consultants on Religious Tolerance
Originally written: 2001-NOV-10
Latest update: 2013-JUL-09
Author: B.A. Robinson
"raw_score": 2.7199134826660156,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Religion |
The Jewish History Of America’s Most Famous Ice Cream
(Getty Images/via The Nosher)
This story originally appeared on The Nosher.
Chunky Monkey. Rum Tres Leches. Banana Nut Fudge.
Who gave the world the gift of these delectable ice cream flavor inventions?
While Italian immigrants are traditionally given credit for opening the first ice cream parlors in the United States in the early 20th century, a series of savvy Jewish entrepreneurs are responsible for the development of gourmet ice cream flavors and their subsequent rise in popularity among the general public.
The name Haagen-Dazs leads many to assume it to be Nordic in origin. Surprise — this internationally renowned ice cream company that has over $2 billion in sales annually was actually the brainchild of a Polish Jew named Reuben Mattus. Just after immigrating to America in the 1920s at the age of 10, along with his widowed mother, Mattus went to work for his uncle’s Italian lemon ice business in Brooklyn. By the early 1930s, the family had expanded its product line to include chocolate-covered ice cream bars, ice cream pops and ice cream sandwiches.
Mattus was convinced he could deliver even higher quality ice cream to his customers, and engaged in a thorough self-education on the science and culinary methodology required to create the richest, most superior frozen confections. Mattus’ real stroke of genius, however, was his recognition that his new ultra-premium ice cream needed a certain cosmopolitan cache to make it appeal to his target audience: sophisticated, moneyed Americans. Thus he decided to give it a “foreign-sounding” name, specifically a Danish(ish) one to pay tribute to the country’s effort to save Jews during World War II.
Remarkably, at about the same time, another Jewish entrepreneur on the other side of the country was launching his own ice cream experiment. Irv Robbins, a Canadian, was self-taught, first gaining skills working in his father’s store and then teaching himself more advanced techniques while crafting ice cream as a lieutenant in the U.S. Navy during WWII. In 1945, Robbins opened the Snowbird Ice Cream parlor, in part with funds from his bar mitzvah (how’s that for foresight?) in Glendale, California, and quickly won rave reviews for the wide variety of flavors. Shortly thereafter, Robbins’ brother-in-law Bert Baskin opened his own shop, and in 1948, the fraternal pair established a joint establishment and soon-to-be company Baskin-Robbins.
They came full circle when, nearly 50 years later, Baskin-Robbins merged with Dunkin’ Donuts — founded by the Jewish entrepreneur William Rosenberg.
Steve Herrell, whose spouse and business partner was Jewish, observed the strides made by Haagen-Dazs in broadening consumers’ tastes for ice cream and decided to capitalize on that momentum by opening an ice cream shop in the 1970s that proffered flavors more exotic than the standard chocolate, vanilla and strawberry. Located in Somerville, Massachusetts, Steve’s served then-novel varieties of creams such as chocolate pudding, cookie dough and peanut butter, as well as afforded customers the opportunity to add “mix-ins” like M&Ms, chocolate sandwich cookies, sprinkles and toffee bits.
And just as Herrell was inspired by Haagen-Dazs, his ice cream innovations would influence other ice cream entrepreneurs, including two mensches who would arguably go on to become America’s greatest creators and purveyors of gourmet ice cream: Ben (Cohen) & Jerry (Greenfield). After witnessing firsthand Herrell perform his mix-in technique at his eponymous parlor, the dynamic duo started their scoop shop in 1978 in Burlington, Vermont. They initially followed Herrell’s style of manually incorporating different toppings, then moved on to churning out pints pre-blended with different candies, baked goods and sauces, thus paving the way for the emergence of Phish Food, Chubby Hubby and other iconic Ben & Jerry’s flavors.
When it comes to ice cream, how sweet it is to be loved by Jews.
"raw_score": 2.7090566158294678,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Food & Dining |
CAMBRIDGE, United Kingdom - The number of mutated genes driving the development of cancer is larger than previously believed, a finding that unveils a new challenge for researchers. Moreover, each cell type carries many "passenger" mutations that have hitchhiked along with driver mutations, the mutations that cause cancer. Cancer biologists now need to distinguish the drivers from the larger number of passengers. "The human genome is a vast place and this, our first deep systematic exploration in cancer, has thrown up many surprises," said Michael R. Stratton, MB, PhD, co-leader of the Cancer Genome Project at the Wellcome Trust Sanger Institute, Cambridge, UK, which funded the research. "We have found a much larger number of mutated driver genes produced by a wider range of forces than we expected."
The new results emerged from the sequencing of 274 megabases of DNA code that corresponded to more than 518 protein kinase genes in 210 cancers. The research yielded more than 1,000 somatic mutations, including possible driver mutations in 120 genes, most of which were not seen before.
The systematic sequencing of the DNA enabled the researchers to trace the evolutionary diversity of the cancers and discover the new cancer-related genes. "For example, we found that a group of kinases involved in the fibroblast growth factor receptor [FGFR] signaling pathway was hit much more than we expected, particularly in colorectal cancers," said P. Andrew Futreal, MB, PhD, co-leader of the Cancer Genome Project.
The study, published in the March 8 issue of Nature (Greenman C et al: 446:153-158, 2007), focused on the kinases, which can act as relay switches to turn gene expression on and off in cells to control cell behaviors such as cell division. The study found statistical evidence for a large set of mutated protein kinase genes implicated in the development of about one-third of the cancers studied, including cancers of the breast, lung, colon, stomach, ovary, kidney, and testis. "Given that we have studied only 518 genes and limited numbers of each cancer type, it seems likely that the repertoire of mutated human cancer genes is larger than previously envisaged," the researchers said.
The team also found important coded messages within the mutations they studied. The type of mutation varied between individual cancers, reflecting the processes that generated the mutations. Some of these processes were active many years before the cancer appeared. Some of these mutation patterns can be deciphered, such as damage from ultraviolet radiation or the cancer-causing chemicals in tobacco. Others require future decoding.
"The time is right to apply the powerful tools of genomics to obtain a comprehensive view of what goes wrong at the DNA level in cancer," said Francis S. Collins, MD, PhD, director of the National Human Genome Research Institute at the National Institutes of Health. "The important and interesting data on protein kinases in this report . . . further encourage the conclusion that a full assault on the cancer genome will yield many opportunities to revolutionize diagnosis and treatment."
On the Web
For more information on the Cancer Genome Project, please visit: www.sanger.ac.uk/genetics/CGP/. For the COSMIC (Catalogue of Somatic Mutations in Cancer) database of cancer mutations, visit: www.sanger.ac.uk/genetics/CGP/cosmic/. | <urn:uuid:b44ee108-b4ee-4e5e-b617-fbf529bf2834> | CC-MAIN-2021-17 | https://www.cancernetwork.com/view/researchers-find-more-genetic-drivers-cancers-highway | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00226.warc.gz | en | 0.948378 | 735 | 3.0625 | 3 | {
"raw_score": 2.9555816650390625,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
While a growing body of research now suggests that x-ray mammography causes more harm than good in the millions of women who undergo breast screening every year without knowledge of their true health risks, the primary focus has been on the harms associated with over-diagnosis and over-treatment, not on the radiobiological dangers of the procedure itself.
In 2006, a paper published in the British Journal of Radiology, titled "Enhanced biological effectiveness of low energy X-rays and implications for the UK breast screening programme," revealed that the type of radiation used in x-ray-based breast screenings is much more carcinogenic than previously believed:
Recent radiobiological studies have provided compelling evidence that the low energy X-rays as used in mammography are approximately four times - but possibly as much as six times - more effective in causing mutational damage than higher energy X-rays. Since current radiation risk estimates are based on the effects of high energy gamma radiation, this implies that the risks of radiation-induced breast cancers for mammography X-rays are underestimated by the same factor.
In other words, the radiation risk model used to determine whether the benefit of breast screening in asymptomatic women outweighs its harm underestimates the risk of mammography-induced breast and related cancers by a factor of roughly four to six.
The authors continued:
Risk estimates for radiation-induced cancer – principally derived from the atomic bomb survivor study (ABSS) – are based on the effects of high energy gamma-rays and thus the implication is that the risks of radiation-induced breast cancer arising from mammography may be higher than that assumed based on standard risks estimates.
This is not the only study to demonstrate mammography X-rays are more carcinogenic than atomic bomb spectrum radiation. There is also an extensive amount of data on the downside of x-ray mammography.
Sadly, even if one uses the outdated radiation risk model (which underestimates the harm done), the weight of the scientific evidence (as determined by the work of The Cochrane Collaboration) actually shows that breast screenings are in all likelihood not doing any net good in those who undergo them.
In a 2009 Cochrane Database Systematic Review, also known as Gøtzsche and Nielsen's Cochrane Review, titled "Screening for breast cancer with mammography," the authors revealed the tenuous statistical justifications for mass breast screenings:
Screening led to 30% overdiagnosis and overtreatment, or an absolute risk increase of 0.5%. This means that for every 2000 women invited for screening throughout 10 years, one will have her life prolonged and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress for many months because of false positive findings. It is thus not clear whether screening does more good than harm.
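To make the arithmetic behind those quoted figures explicit (this is a back-of-envelope restatement of the review's own numbers, not an additional finding):

\[ \text{overtreated} = 0.5\% \times 2000 = 10 \text{ women}, \qquad \text{benefit} = \tfrac{1}{2000} = 0.05\%, \qquad \text{false positives} \approx \tfrac{200}{2000} = 10\% \]

In other words, for every woman whose life is prolonged, roughly ten are treated for a "cancer" that would never have harmed them, and about two hundred endure a false alarm.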
In this review, the basis for estimating unnecessary treatment was the 35% increased risk of surgery among women who underwent screening. Many of the surgeries, in fact, were the result of women being diagnosed with ductal carcinoma in situ (DCIS), a "cancer" that would not exist as a clinically relevant entity were it not for the fact that it is detectable through x-ray mammography. DCIS, in the vast majority of cases, has no palpable lesion or symptoms, and some experts believe it should be completely reclassified as a non-cancerous condition. | <urn:uuid:48b6b562-f207-4a86-8df1-eb15ffcb821c> | CC-MAIN-2016-07 | http://www.greenmedinfo.com/blog/how-x-ray-mammography-accelerating-epidemic-cancer | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146550.16/warc/CC-MAIN-20160205193906-00335-ip-10-236-182-209.ec2.internal.warc.gz | en | 0.96113 | 710 | 2.796875 | 3 | {
"raw_score": 3.0481550693511963,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
Does your online course respect and encourage diversity? Here are some facts and questions to consider in order to maximize your course inclusivity.
Current State of Technology
Each year our society's growing dependence on technology becomes more and more apparent in our everyday lives. The surge in this dependency is reinforced by the numbers collected in the 2015 Global Digital Snapshot, which states that there are currently over 7.3 billion people in the world today. Approximately 43 percent of those individuals (over 3.2 billion) are active Internet users, and over half of the world's population (over 3.7 billion) own a cell phone.1
This upswing in technology use is not only prominent in Western nations; it has also made great gains in other nations worldwide. Many developing countries, particularly India and China, have seen record growth over the course of the decade. Approximately 90 percent of the world's population can access mobile networks, and three-quarters of those users live in developing nations.2 The ability to buy inexpensive computers and smartphones has made it possible to address many issues caused by the global digital divide.3 Individuals living in rural and low-socioeconomic communities now have the capacity to acquire the same knowledge and access to many of the same services as others do through the Internet. As a result, the demand and desire for communities to take advantage of online education has begun to grow.
The first online courses appeared in the mid-1980s, and for decades afterward such courses were concentrated in only a few institutions. Now more than 95 percent of colleges and universities with over 5,000 students offer online classes for credit. By fall of 2012, 4 million undergraduates took at least one course online. To put this number in perspective, more students now take a class online than attend a college with varsity football.4
As the presence of technology continues to grow in Higher Education, it has become more challenging to find the best ways to engage our increasingly diverse student population. This is a result of many factors; one of which is the evolving demographics of what is considered a typical student. Due to the flexibility and the ease of access of online courses there is an evolving shift in the makeup of the student body as a whole. In Clay Shirky’s recent article, The digital revolution in higher education has already happened. No one noticed.,4 he explores how the “non-traditional” student is becoming the “traditional” student in online higher education. These are students who:
- Did not enroll in a university or college immediately after high school
- Are mostly 25 years old or older
- Have dependent children or elders
- Are usually married, or are single parents
- Tend to be enrolled part-time
- Usually work full-time
- Do not live on campus
This also includes understanding that while technology has eliminated many barriers that have in the past prevented individuals from accessing higher education online, there are still large portions of the population who struggle with access to the internet and latest technology. These groups include:
- Students with Disabilities (many times referred to as Special Education or SPED)
- Economically Disadvantaged
- Limited English Proficient (many times referred to as English Language Learners or ELL)
- African American, Hispanic and Native American.
There are three primary reasons these groups have a slower rate of technology adoption: inconsistent internet access, the high cost of the latest technologies or operating systems used in digital devices, and insufficient access to free or cheap applications of online tools.
5 Strategies to Consider During Online Course Development
There are many factors to consider when developing an online course. One of those factors includes becoming informed of the cultural diversity among students and finding unique ways to include their culture into the course. Here are five strategies to consider to help foster inclusivity and showcase cultural awareness.
#1 – Know Your Student Audience
Having a good connection with, and understanding of, your course's student population is essential to becoming more culturally aware. Here are a few suggested areas to assess before or during the first week of a course:
- The number of international students enrolled in the course.
- Reasons why students decided to enroll in the course/program.
- Whether any bilingual or multilingual students are enrolled in the course.
- The different cultures and backgrounds of your students, in an effort to learn more about them.
#2 – Review Course Activities for Cultural Awareness and Sensitivity
Awareness and knowledge are key to developing course activities that recognize the cultural backgrounds of your students. With recognition and encouragement, diversity will lead to a more harmonious online course environment.
#3 – Tap Into Your Student’s Backgrounds, Cultures, and Experiences
When students tap into their past experiences, they can make deeper connections to the course curriculum. For those opportunities to be present, barriers in the form of misconstrued perceptions must first be addressed. The most common misconception is that our differences are what drive us apart. In fact, those differences are often not so different at all: there are many common connections among student cultures and diverse backgrounds, and these connections can bring students closer together.
#4 – Incorporate Tools to Help Bridge the Cultural and Socioeconomic Gap
There are a variety of available technologies that can help bridge the cultural and socioeconomic gaps faced by many online students. Through the use of apps, multimedia, and collaborative tools, students have the opportunity to participate in technology-rich course activities without having to purchase expensive software or make costly hardware upgrades to their machines. Examples of some of these tools can be found within the TeachOnline articles Third Party Tools used in ASU Online Courses5 and Technology Tools Currently Integrated in Blackboard.6
#5 – Strive to Create a Safe, Trustworthy, & Positive Rapport
The absence of a safe and harmonious course environment can hinder a student's willingness to dive deep into the course content, as well as their desire to make connections with the course community. Establishing parameters and encouraging awareness of diverse backgrounds at the start of the course can help discourage negative or unwarranted behaviors among students in the course.
To successfully establish an inclusive and culturally aware online course environment, expectations must be set at the very beginning of the course. The Critical Multicultural Pavilion EdChange project by Paul C. Gorski provides a list of excellent Awareness Activities7 that range from strategies and preparation to icebreakers. Below is an example of one of these activities that works especially well within an online course.
Exchanging Stories — Names
- Step 1 – Briefly write about the story of your name (i.e. Who gave you your name & why? meaning, origin, nicknames. Reason for being named? Any interesting story about how you were named.)
- Step 2 – Share your story as an introductory discussion board post among the course community.
- Step 3 – Comment on another student’s post (i.e. What you liked about the post, any connections you would like to share)
- Step 4 – Reflect on the activity and experience (How did it feel to share your story? Why was this activity important? What did you learn?)
Technology allows for a more diverse and all-encompassing student body. Therefore, it is important as educators to be sensitive to technological and cultural gaps. It is important to be cognizant of barriers that diverse populations are faced with and for instructors to strive to find ways to work with them. We hope the resources provided in this article will help others to create a more inclusive and connected online course.
Co-written by Monique Jones and Obi Sneed
1 Global Digital Statshot: August 2015. (n.d.). Retrieved December 30, 2015, from http://wearesocial.com/uk/special-reports/global-statshot-august-2015
2 The Role of Science and Technology in the Developing World in the 21st Century. (n.d.). Retrieved December 30, 2015, from http://ieet.org/index.php/IEET/more/chetty20121003
3 Digital Divide – ICT Information Communications Technology. (n.d.). Retrieved December 30, 2015, from http://www.internetworldstats.com/links10.htm
4 Shirky, C. (2015, November 6). The digital revolution in higher education has already happened. No one noticed. Retrieved December 30, 2015, from https://medium.com/@cshirky/the-digital-revolution-in-higher-education-has-already-happened-no-one-noticed-78ec0fec16c7
5 Hobson, S. (2014, March 28). Third-Party Tools Used in ASU Online Courses – TeachOnline. Retrieved December 30, 2015, from https://teachonline.asu.edu/2014/03/third-party-tools-used-in-asu-online-courses/
6 Savvides, P. (2015, November 16). Technology Tools Currently Integrated with Blackboard. Retrieved December 30, 2015, from http://teachonline.asu.edu/2015/11/technology-tools-currently-integrated-with-blackboard/
7 C. Gorski, P. (n.d.). EdChange – Multicultural, Anti-bias, & Diversity Activities & Exercises. Retrieved December 30, 2015, from http://www.edchange.org/multicultural/activityarch.html | <urn:uuid:d0f3700f-34c8-4f8c-b8e4-d2d4557efc03> | CC-MAIN-2023-06 | https://teachonline.asu.edu/2016/01/fostering-inclusive-environment-developing-online-courses/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00738.warc.gz | en | 0.937257 | 1,998 | 2.578125 | 3 | {
"raw_score": 2.801119089126587,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
An Introduction to the New Testament by Richard Heard
Richard Heard, M.A., M.B.E., M.C., was a Fellow of Peterhouse, Cambridge and University lecturer in Divinity at Cambridge (1950). Published by Harper & Brothers, New York, 1950. This material prepared for Religion-Online by Ted & Winnie Brock.
Chapter 1: The Critical Study of the New Testament
Christians agree in regarding the books of the New Testament as possessing a special authority. They differ as to the nature of this authority and in their interpretation of the contents of the books. The purpose of the critical study of the New Testament, if it is also religious, is to use all the available methods of applying human knowledge to discover how the authority of the New Testament is to be understood, and to set the revelation which it contains as far as possible in its original historical context. The central fact that God has revealed himself to men through Jesus Christ is in the last resort based for Christians on faith and experience and not on knowledge alone. It can be accepted or rejected, but, for those who accept it, it becomes as the act of God no longer a matter for human argument, but the supreme event of history. The final aim of Christian study of the New Testament is the better understanding of the revelation which it contains, and here the resources of human knowledge can be fitly employed, because the books of the New Testament were written and copied by men who were fallible like ourselves and under the influence of their human environment.
This fallibility becomes evident as soon as we undertake the necessary preliminary examination of the text of the New Testament and of the way in which the books of the New Testament were gathered into one authoritative collection. There are a great many places where the wording of our oldest Greek manuscripts differs, and a considerable number where it is impossible to decide with certainty exactly what the original authors wrote. A study of the history of the early Church reveals disagreements as to the books which should be reckoned as of special authority, and it took centuries of dispute before the final selection received general agreement. The detailed study of the books themselves provides further evidence that this fallibility extended to the authors themselves and to the sources which they used. ‘We have this treasure in earthen vessels’(2 Cor. 4:7).
When once we have come to see that the early disciples did not have perfect memories and that their understanding of Jesus was influenced at many points by the mental and religious background of their time, the purpose of the modern critical and scientific approach to the contents of the New Testament becomes clear. It is to establish as far as possible the historical truth as to what Jesus said and did, how the Church grew and developed, and the historical circumstances in which Christians came to write the books of the New Testament. We cannot, of course, achieve more than a very limited reconstruction of the New Testament events and teaching, and on many important points there will continue to be great disagreement. Yet for all the uncertainties that follow in its train, the critical study of the New Testament provides us with a picture of Christian origins that gives a new focus to certain aspects of Jesus’teaching and the development of the Church and a truer understanding of the mode of God’s revelation than that which derives from a complete and uncritical acceptance of the New Testament as uniformly and verbally inspired.
The Progress of Criticism
It was only by slow degrees over a period of centuries that the Church settled which books were to be included in the New Testament and given a place side by side with the Old Testament. The ancient Church contained some acute and learned scholars who raised many of the critical questions that are discussed to-day. Thus Irenaeus at the close of the second century noted the different numbers given to the Beast in his texts of Rev. 13:18, and preferred the reading '666', as modern scholars do, on the ground that it was contained in the oldest copies known to him. In the third century Origen expressed doubts as to the Pauline authorship of the Epistle to the Hebrews on the grounds of the epistle's style and thought, and Dionysius of Alexandria on similar grounds distinguished between the author of the fourth gospel and the author of Revelation. Such instances of critical acumen could be multiplied, but for the most part members of the Church lacked a scholarly knowledge of Greek and by the end of the fourth century the text of the chosen books was received unquestioningly as of apostolic authority; a series of revisions produced an 'official' Greek text which was to remain of great influence from the fifth to the nineteenth century, but which we now see in the light of further knowledge to have been based on wrong principles.
The attribution to this New Testament text -- as to that of the Old Testament -- of verbal inerrancy was associated with methods of exegesis which often disregarded the literal meaning of a passage for an allegorical interpretation which gave it a meaning of more present significance. Such methods had been employed by the New Testament writers themselves in their interpretation of Old Testament passages (e.g. I Cor.10 1-2, Heb. 7: 1-17) and for the same reason, the desire to gain the authority of infallible scripture for purposes of controversy or instruction; they could only be justified when the original meaning of the passage had been taken into account, and even in the New Testament this had often not been done. In the later Church this type of exegesis sometimes led to fantastic misinterpretations, e.g. the view held by both Origen and Jerome that Peter and Paul had only pretended to quarrel at Antioch (Gal. 2:11 ff.). Even when the New Testament was literally interpreted, the conception of the equal authority of all passages in it led to distorted ideas of what was the teaching of Christ: the literal interpretation of the Revelation, for example, with its material and temporal picture of Christ’s reign (Rev. 20) has sometimes obscured the spiritual nature of Jesus’teaching on the kingdom.
The effect of such a mechanical doctrine of inspiration and of such inadequate methods of interpretation was to rob the New Testament of much of its true force and to make it the handmaid of ecclesiastical tradition for more than a thousand years. The Reformation saw the reemergence of some true principles of criticism, but they were only slowly to influence the now widespread reading of the New Testament. Thus Erasmus’publication in 1516 of a Greek text based on the comparison of manuscripts marked the beginning of a new era in the determination of the correct text, but progress in the examination and classification of manuscripts was slow, and three centuries were to pass before the textual criticism of the New Testament was firmly based on scientific principles. Luther himself distinguished between the value of different parts of the New Testament (p. 18), and Calvin declared it as ‘the first business of an interpreter, to let his author say what he does say, instead of attributing to him what we think he ought to say’, but neither reformer fully lived up to his own precepts, and it was only gradually that scholars began to adopt a truly historical approach to the documents of the New Testament.
There is no one moment at which ‘modern’methods of criticism can be said to have come into existence, but the first half of the nineteenth century saw their adoption on a wide scale in the universities of Germany. The rationalism of the eighteenth century had led to the widespread abandonment of belief in the infallibility of the Bible and to the rejection e.g. of the miraculous elements of the Old Testament narratives. The application of scientific methods of source-criticism and textual criticism to the writings of Greek and Latin authors had also begun. When men trained in such scientific methods and dominated by philosophical preconceptions which left no room for the miraculous in human life turned to the study of the New Testament, they started a revolution in New Testament criticism. The philosophical bases of thought changed, and are still changing, and the Lives of Jesus and Histories of the Early Church which were written under their influence have each yielded place in turn to a new interpretation, but in the process of controversy the documents of the New Testament have been subjected to such a continuous and minute scrutiny that their scientific study is now established on firm and stable foundations. Perhaps the most important pioneer in the early nineteenth century was the great scholar Lachmann who applied to the New Testament methods which he had learnt from his study of the Classics. It was he who in 1830 laid the foundations of modern textual criticism of the New Testament by rejecting the authority of the traditional ‘textus receptus’(p. 21) in favour of the witness of the oldest Greek and Latin manuscripts, and his declaration in 1835 that the gospels of Matthew and Luke presuppose the Marcan order of the gospel-narrative pointed the way to what is now the accepted basis of any comparative study of the gospels.
The priority of Mark, however, was not generally acknowledged for many years, and only when every possible explanation of the similarities between the gospels, e.g. that Matthew used Luke, that Luke used Matthew, that a common oral tradition alone accounts for the similarities, had been put forward and examined in great detail, did it become finally clear that Mark was the earliest of our gospels. In the process of controversy that led to this conclusion it became widely recognised that a second document, largely composed of sayings of Jesus, was also used by Matthew and Luke, although controversies still continue as to the nature and extent of this source, which is normally designated as Q (from the German Quelle = source).
When once Mark had been acknowledged as the ‘foundation’gospel, the implications of such a belief were seen to be important. The Matthaean authorship of the first gospel was no longer defended by the majority of critics, and both the apostolic authorship and the historical value of the fourth gospel were matters of dispute. On the other hand there was widespread agreement at the end of the century that Mark provided a generally trustworthy account of the ministry of Jesus, although in the prevailing liberal temper of the time critics tended to question the historicity of the miracles recorded in the gospel and also to disregard the apocalyptic nature of e.g. Mk. 13. The authority of Mark was claimed in support of the view that Jesus was first and foremost a great human ethical teacher, whose teaching had been altered by the early Church, and especially by Paul, into a system of theological and sacramental belief.
The problem of reconciling a merely human view of Jesus with the emergence of the Catholic Church was, of course, much older, and the theories of the Tübingen school of critics, which had first been put forward in the eighteen-thirties by F. C. Baur, exercised a wide influence on men’s conceptions of the early history of the Church for most of the nineteenth century and spread a distorted view of the circumstances in which Acts and the epistles were written. Under the influence of the philosopher Hegel’s theory that history proceeds by thesis, antithesis, and synthesis, Baur and his followers proclaimed that the early Church was rent asunder by conflict between Jewish (Petrine) and Gentile (Pauline) factions, and that Acts represented an attempt of later Catholicism to veil these differences. To support these views Baur denied the Lucan authorship of Acts, whose historical value he impugned, and left to Paul the authorship only of Romans, I and II Corinthians, and Galatians; the other ‘Pauline’epistles were products of the Christian struggle against Gnosticism. It was fifty years before the traditional authorship of Acts and of most of Paul’s epistles were again re-established in the favour of the leading German scholars.
The nineteenth century was above all a period in which new knowledge was gained, systematised, and made available for effective use. In the textual field thousands of manuscripts were examined, collated, and classified, and it was the new availability of adequate material that made possible the establishment of the New Testament text on scientific principles (p. 22). Archaeological finds threw new light on the accuracy of many of the details in Acts, e.g. the Asiarchs of xix 31 and the 'chief man' of Malta xxviii 7, and papyri dug up in Egypt helped to elucidate the language of the New Testament. The knowledge of the New Testament background was immensely increased both by archaeological discoveries and by the scientific assessment of new sources of evidence. The effect of the accumulation of this knowledge was to make possible a much fuller understanding of the New Testament writers as men of their own time; there is hardly a verse in the New Testament where the application of this knowledge does not bring out some new aspect of the original meaning.
The early years of the twentieth century saw the rise of two new schools of thought which have each made a permanent contribution to the understanding of Jesus and the early Church, although not in the form in which it was originally made.
The ‘eschatological’interpretation of Jesus was a protest against the liberal misinterpretation of him as primarily an ethical teacher. In the last years of the century J. Weiss had shown that such a picture of Jesus was incompatible with the presentation of him in Mark as proclaiming the imminence of the Day of Judgment and the setting up of the Kingdom of God. Weiss, and after him A. Schweitzer in a book, The Quest of the Historical Jesus, which made a great impression on English scholars, interpreted Jesus as primarily a prophet of the approaching world-catastrophe who stood in the succession of Jewish apocalyptists. Such a theory, however, has proved too one-sided for acceptance as a satisfactory explanation of Jesus’life, although it has brought out the undoubted apocalyptic element in the gospels and has forced all subsequent critics to offer an explanation of it.
Of even greater influence has been the 'sceptical' approach to the gospels of a succession of German critics. The 'Christ-myth' theory that Jesus never existed (Cf. A. Drews, The Christ-Myth; J. M. Robertson, Pagan Christs.) was an aberration of thought that could never be taken seriously, but the view that we can know very little about him because the gospels are the creation of the Christian community has received unexpected support in the last fifty years. The starting-point of the movement was the publication by Wrede in 1901 of a book (Das Messiasgeheimnis in den Evangelien (=The Messianic Secret in the Gospels). Significantly enough the book has not appeared in an English translation.) in which he challenged the genuineness of the Marcan outline of Jesus' ministry. The book made little stir at the time, but was to have great influence, especially upon the advocates of 'Form Criticism', a new method of gospel criticism that arose in the years succeeding the first World War.
The Form critics treated the gospels as ‘Folk literature’compiled out of the beliefs of a community, and broke down the gospel material into separate incidents and pieces of teaching which had had a separate existence before being collected together and ultimately formed into a gospel. They drew on parallel folk traditions to show that such isolated stories obey certain laws of development, and that they often lose their original point in the telling. The result of the application of such principles of criticism to Mark by sceptical scholars was to change ‘the memoirs of Peter’into an anonymous compilation of material, the historical value of which could not be determined with any certainty. The methods of form criticism have a certain value, and the employment of them opens up new possibilities of understanding how the gospels were composed, but the majority of critics today would separate the employment of such methods from the adoption of the sceptical standpoint which used them to such a negative effect.
For all those who hold that the early Christians misunderstood Jesus there arises the necessity of accounting for the misunderstanding and for the development of the earliest community into the Church as we know it in the second century. The breakdown of the Tübingen theory was followed by the development of other theories which attempted to solve the same problem without disregarding so much of the evidence of Acts and the epistles. Attempts were made to show that Paul was responsible for the transformation of a simple Jewish cult in which Jesus was thought of as Messiah into a Hellenistic mystery-religion, and some scholars tried to push the Hellenisation of Christianity even farther back and to associate it with the introduction of title ‘Lord’ (Greek, kyrios) for Jesus in early Syrian-Christian circles. Against such theories the eschatological school maintained the essentially Jewish nature of Paul’s teaching and held that his conceptions, e.g. of baptism and the eucharist, were based on eschatological expectation and not on any ‘magical’regard for them; the Hellenisation of Christianity was due not to Paul but to his Gentile converts.
This sketch of the development of ‘tendencies’ in modern New Testament criticism has been confined for the most part to work done by German scholars. This is not accidental, for the Germans have been the outstanding pioneers, not only in the production of new theories about the New Testament, but in the accumulation of knowledge. Yet it would be wrong to ascribe too much importance to the emergence of ‘new schools of thought’in the progress of New Testament studies. Such developments have played a useful and valuable part in increasing our understanding of the New Testament, but even more valuable has been the patient sifting of each new theory as it has appeared, the elimination of what is unsound, and the retention for permanent profit of what has proved to be of worth when tested by the New Testament documents themselves.
It is in this field that scholars in England, America and elsewhere, as well as in Germany, have made their most important contributions. The progress of criticism in England, for example, has not been by violent swings of opinion but by gradual steps, in which the conception of a verbally inerrant New Testament has yielded slowly but surely to that of a collection of books, imperfect in all kinds of ways, but containing very much that is historically trustworthy and offering still a sure witness to the truth of the revelation which it contains.
The present position of New Testament criticism cannot be easily defined, although the later chapters of this book attempt to summarise some of the more generally accepted views, and to indicate the main issues of present controversy. There are many important points on which critical opinion is likely to continue divided, but there are good grounds for thinking that we can still get from the New Testament a knowledge of Jesus and of his Church different in some respects from that of earlier days but with the same power to inspire men to follow him in their lives.
Books For Reading:
W. F. Howard. The Romance of New Testament Scholarship (Epworth Press).
M. Jones. The New Testament in the Twentieth Century (Macmillan).
A. Schweitzer. The Quest of the Historical Jesus (Black).
A. Schweitzer. Paul and His Interpreters (Black).
M. J. Lagrange. The Meaning of Christianity according to Luther and his Followers (Longmans).
S. L. Caiger. Archaeology and the New Testament (Cassell).
R. M. Grant. The Bible in the Church (Macmillan).
| <urn:uuid:8bd8abca-9474-4f56-b4ef-2ed3fa2c84c7> | CC-MAIN-2016-44 | http://www.religion-online.org/showchapter.asp?title=531&C=545 | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718309.70/warc/CC-MAIN-20161020183838-00190-ip-10-171-6-4.ec2.internal.warc.gz | en | 0.970357 | 4,064 | 3.390625 | 3 | {
"raw_score": 3.0098555088043213,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Religion |
Stretching before and after exercise is always a confusing component in a workout regimen. When improving our bodies, we always work to become stronger, leaner and curvier. This plan often leaves out one of the key components in overall physical fitness: flexibility. Poor flexibility leads to a limited range of motion, which often leads to inefficient workouts and possible injuries. However, knowing when and how to stretch before and after a workout is just as important as stretching.
When to Perform Stretches
Myth: Perform stretches before a warm up.
Fact: Though stretching should be part of warming up, the muscles must be warmed before flexibility exercises. Perform light cardiovascular exercise for 5 to 10 minutes before stretching to prepare your muscles for activity.
The best type of stretching to perform before your workout is dynamic stretching. This is stretching that involves movement of the muscles. Dynamic stretching includes arm circles, weightless walking lunges, side bends and trunk rotations.
Stretching to Prevent Injury
Myth: Static stretching prevents injury.
Fact: Performing static stretches before a workout will not properly prepare your muscles for activity. Static stretching should only be performed after exercise to prepare your muscles for recovery.
In order to reduce the risk of injury during exercise, you must properly warm up with a light jog or brisk walk followed by dynamic stretches. Dynamic stretching not only increases your range of motion for exercise, but also mentally prepares you for the activity soon to be performed.
Who Benefits from Stretching?
Myth: Flexibility is only necessary in competitive athletes.
Fact: Everyone can benefit from flexibility. Good flexibility improves posture. Depending upon the type of work we perform, our daily activities can cause tightening in areas that affect posture. Poor posture can lead to pain and discomfort, often resulting in time at the chiropractor or doctor's office.
Flexibility also improves performance in the gym. Increased range of motion allows us to perform better by improving our agility and speed. This also translates well for runners: more flexibility in the hip flexors and quads equals a more efficient stride.
Necessity of Stretching before and after Exercise
Myth: It's only necessary to stretch before a workout.
Fact: There are two important components in flexibility: dynamic flexibility and static flexibility. Dynamic flexibility is our range of motion on a two-planar field. This improves your speed and ability to react to a motion. An example of dynamic flexibility would be kicking or taking off during a sprint.
Static flexibility is our range without considering speed. Static flexibility is measured on a single-planar field to show range of motion against an external force. An example of static flexibility would be how long and how far a hamstring stretch can be held.
Dynamic flexibility is worked on before and during performance, while static flexibility is worked on after performance as a part of a post-workout recovery. Both are equally important to overall flexibility. Therefore, a warm up and cool down should incorporate flexibility training. | <urn:uuid:4562b990-fed1-4ed9-9c37-4cd3eb7eedcf> | CC-MAIN-2015-11 | http://www.3fatchicks.com/stretching-before-and-after-exercise-myths-and-facts/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463103.84/warc/CC-MAIN-20150226074103-00325-ip-10-28-5-156.ec2.internal.warc.gz | en | 0.945935 | 609 | 3.28125 | 3 | {
"raw_score": 1.1112098693847656,
"reasoning_level": 1,
"interpretation": "Basic reasoning"
} | Sports & Fitness |
Electrical engineering assistant professor LaVonda Brown is bringing a human touch to the field of robotics.
Brown, who joined the electrical engineering faculty in January, is researching ways to leverage robots and other health care technologies to improve people’s quality of life. Though Brown has aspirations to be a leader in the field, she said her route to health care robotics wasn’t always obvious.
Brown said as an undergraduate she knew she didn’t fit the stereotypical profile of an electrical engineer. She was outgoing, wanted to work on interdisciplinary teams and wanted to see her work make an impact in people’s lives in real time.
The marriage of her personality and research interests brought her to the realm of human-robot interactions. First, as a Ph.D. candidate, she researched utilizing a Robotis Darwin humanoid robot as an educational tutor for math, and the interactive technology eventually led her to a focus in health care robotics.
Brown said her work allows her to positively impact people’s lives and pay forward the support and assistance she’s received throughout her life.
“It gives me purpose,” Brown said. “It makes me wake up and want to come to work every morning, just knowing that somebody is going to benefit from this and I can improve their quality of life.”
Brown’s research has focused on programming Darwin robots to serve as therapists for children with motor-skills disorders like cerebral palsy. Brown said the robots can be programmed to assist patients in physical therapy exercises by offering instruction, encouragement and performance feedback.
In some cases, robots may be viable alternatives to human therapists, Brown said. The need for assistance for children with motor-skills disorders currently outpaces the number of therapists available. Robots could provide patients with the daily physical therapy assistance they need while alleviating the load for families and therapists alike, she said.
Brown’s research interests also extend beyond robotic platforms.
Brown is currently collaborating with Emory University’s Alzheimer’s Disease Research Center to develop low cost eye-tracking technology that can contribute to the detection of early onset Alzheimer’s Disease. Brown said the longitudinal study is entering its second year and new data is being collected weekly.
Brown said she designed the program as a multi-camera system that monitors eye gaze as subjects interface with a series of images. The cameras track the movement of the subject’s eyes as they scan the screen, and the results are analyzed to determine if trends exist for people with cognitive impairments and disorders, she said.
The assumption is that people with cognitive impairments will scan the images differently when viewing repeated images than someone with good memory. If the study is successful, Brown and her fellow researchers plan to scale the technology down into a mobile version, hopefully an app, that would utilize cell phone cameras to test for Alzheimer’s and similar disorders.
Electrical and computer engineering division chairman Jerry Trahan said Brown’s work adds valuable expertise to the college. Innovation in health care robotics and instrumentation is growing in demand nationally as the need for improvements to health care systems and delivery grows, he said.
Brown’s research and knowledge of robotics builds on the College of Engineering’s growing health care focus. In the classroom, Brown’s background helps her expose students to the more tangible, hands-on perspective of what’s possible in electrical engineering, Trahan said. Having a new take on the field could help motivate and inspire students, he said.
Brown said she hopes to heavily involve students in her lab and research efforts. She’s interested in a diverse range of issues, and having a variety of perspectives helps bring new and valuable ideas to the table. She wants the students to have an equal hand in the research, and for her lab to operate as a team.
In the future, Brown hopes to continue diversifying her research, possibly researching obesity. There’s no set plan now, but Brown said she’s going to follow her passions.
“I want to be involved in things I’m genuinely passionate about. That’s what makes the research successful,” she said. | <urn:uuid:111275b0-f0e8-4a8a-b1fc-577b2245f060> | CC-MAIN-2019-04 | http://www.lsunow.com/daily/electrical-engineering-professor-researches-ways-to-use-robotics-to-improve/article_8482c2dc-14f2-11e7-8241-6f7105de0d4e.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583688396.58/warc/CC-MAIN-20190120001524-20190120023524-00584.warc.gz | en | 0.95936 | 868 | 2.640625 | 3 | {
"raw_score": 2.9204819202423096,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Ping time is the time it takes for an ICMP echo request to travel to a host and for the reply to come back. If a server or machine is up and reachable, it answers with an echo reply, which is why ping can also be used to scan a range of IP addresses for reachable hosts.
However, if you disable ping responses, your server effectively goes ghost on the network: simple scans and reachability checks will no longer reveal it. This has some potential advantages.
Advantages of Disabling Ping command in Linux Servers
Some of the significant reasons why someone might want to disable ping on their machines are:
- To make the server harder to discover, and therefore somewhat more secure on the network.
- To protect the server from attacks launched from a compromised machine on the same network.
- To blunt ping-flood style denial-of-service attempts aimed at overwhelming the machine.
- To hide the system or server from casual scans of the network.
Before discussing the methods to disable ping on a networked machine, we should understand what iptables is and what ICMP stands for.
Understanding iptables
iptables is a command-line firewall that allows or blocks traffic according to chains of policy rules. It monitors traffic at the level of individual packets: when another machine tries to connect to you, the kernel checks the packet against the rules in the relevant chain, which describe the traffic the current machine is willing to accept. If a rule (or the chain's default policy) says to drop or reject that traffic, the connection is prevented.
ICMP stands for Internet Control Message Protocol. It is different from TCP: it has no ports and it does not carry application data. Instead, it is used for control and diagnostic purposes, such as sending and receiving error messages and verifying that another IP address is reachable, which is exactly what ping relies on.
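If you want to see this exchange for yourself, you can ping a host from one terminal while watching the ICMP traffic in another. This is only an illustrative sketch: it assumes the tcpdump package is installed, and 192.0.2.10 is a placeholder address from the documentation range that you would replace with a real host.

$ ping -c 4 192.0.2.10          # send four echo requests to the target

$ sudo tcpdump -n -i any icmp   # in a second terminal: print each echo request and echo reply as it passes

The tcpdump output shows the "ICMP echo request" and "ICMP echo reply" packets on the wire, which is exactly the traffic the methods below are designed to suppress.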
How to disable ping on Linux Servers?
There are several methods to disable ping on Linux servers, and the details vary from one Linux distro to another. However, here we have discussed some methods that can be used on almost any Linux machine.
Using ICMP echo
Telling the kernel to ignore ICMP echo requests is a simple and useful way to stop your Linux server from responding to ping.
$ echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all
You need root privileges to run this command; see the note below if you rely on sudo rather than a root shell. After running it, we will try to ping the server to test that it worked.
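A note on sudo: the redirection in the command above is performed by your own (unprivileged) shell, so prefixing the line with sudo alone still fails with "Permission denied" unless you are already root. The commands below are a sketch of the usual workarounds; they use only standard tools, but your distribution may differ in detail.

$ sudo sh -c 'echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all'   # run the whole line in a root shell
$ echo 1 | sudo tee /proc/sys/net/ipv4/icmp_echo_ignore_all       # or let tee perform the privileged write
$ sudo sysctl -w net.ipv4.icmp_echo_ignore_all=1                  # same effect via the sysctl interface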
We can notice that there are no responses from the ping. Let's try to enable the ping once again. We can use 0 instead of 1 this time.
$ echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all
You can see that we get a response to the ping again after re-enabling it. Please note that this way of disabling ping is only temporary: the setting is reset when you reboot the system. If you want to make the change permanent, you need to edit the /etc/sysctl.conf configuration file.
net.ipv4.icmp_echo_ignore_all = 1
Add this line to the file, then apply the change by running:
$ sysctl -p
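On most modern distributions you can also place the setting in a drop-in file under /etc/sysctl.d/ instead of editing /etc/sysctl.conf itself. The sketch below assumes a systemd-style layout, and the file name is only an example.

$ echo 'net.ipv4.icmp_echo_ignore_all = 1' | sudo tee /etc/sysctl.d/99-disable-ping.conf   # create a dedicated drop-in file
$ sudo sysctl --system                                                                     # reload settings from all configuration files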
Using iptables
One of the best ways to stop ping responses is with iptables. Before going further, we should make sure iptables is available on the server or machine, which we can verify by checking its version.
$ iptables --version
iptables comes pre-installed on almost all Linux distros. Now we add the following rules.
$ iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
$ iptables -A OUTPUT -p icmp --icmp-type echo-reply -j DROP
These rules drop incoming echo requests and outgoing echo replies. To verify that they have been added, we can list the current rules with this command -
$ iptables -L
Once you are done, let's try to ping the server once again.
We can notice that we did not get a response when we pinged the server.
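Keep in mind that rules added with iptables -A live only in the kernel's memory and are lost on reboot, just like the /proc method. How rules are persisted varies by distribution; the sketch below assumes a Debian/Ubuntu-style system with the iptables-persistent (netfilter-persistent) package installed.

$ sudo iptables-save | sudo tee /etc/iptables/rules.v4   # save the current ruleset to the file loaded at boot
$ sudo netfilter-persistent save                         # or let the helper service save it for you

On Red Hat-style systems the equivalent is typically the iptables-services package with /etc/sysconfig/iptables, or a higher-level tool such as firewalld.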
Although disabling ping has real benefits, it also costs you some useful things. You can no longer use ping for quick diagnostics or for sweeping your own servers across the network, and monitoring tools or colleagues who rely on ICMP to share information about a host's status may report it as down. If you also rely on ping to judge the quality of a connection, for example for interactive gaming, you lose that measurement as well. One compromise, sketched below, is to keep answering pings from a single trusted machine while ignoring everyone else.
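The following is a minimal sketch of that compromise; 203.0.113.5 is a placeholder for the address of your own admin or monitoring host.

$ sudo iptables -I INPUT 1 -p icmp --icmp-type echo-request -s 203.0.113.5 -j ACCEPT   # allow the trusted host first
$ sudo iptables -A INPUT -p icmp --icmp-type echo-request -j DROP                      # then drop echo requests from everyone else

Because iptables evaluates rules in order, the ACCEPT rule inserted at position 1 is matched before the general DROP rule.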
In this guide, we discussed two important methods that allow you, with root privileges, to stop any Linux server from answering pings. This will not make the machine invisible to a determined attacker, but it does hide your presence from casual scans of the local network and gives you a little more privacy.
We started with a basic understanding of the ping command and how it works by sending packets. We discussed the advantages you can enjoy by disabling ping on Linux servers across the network. Moving ahead, we covered some important concepts such as iptables and the ICMP protocol, and then used those concepts in two different methods to disable ping.
We certainly hope that using this detailed and comprehensive guide will allow you to effectively disable ping either temporarily or permanently on your Linux Servers.
| <urn:uuid:4c38d35d-1523-4f2a-b5a6-0bcafe37de6f> | CC-MAIN-2023-50 | https://1gbits.com/blog/how-to-disable-ping-in-linux-servers/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100527.35/warc/CC-MAIN-20231204083733-20231204113733-00654.warc.gz | en | 0.908237 | 1,152 | 2.921875 | 3 | {
"raw_score": 2.399902105331421,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Software |
This page focuses on the adverse effects of work on health, even though the positive effects of appropriate work on health and well-being are no less important.
Every year about 10 million of the 150 million workers in the European Community are affected by incidents, "accidents" or diseases at work. Direct compensation costs are estimated at 20 billion ECU per year.
In the UK data on medically reported incidence of occupational disease and work-related illhealth is collected from occupational physicians (OPRA), General Practitioners (THOR-GP) and other doctors participating in the THOR network.
According to UK official statistics, every year about 2,000 lives are lost through occupational disease or injury, about 20,000 major industrial injuries occur (e.g. skull fracture, loss of sight) and there are about 200,000 injuries resulting in a work disability of 3 days or more. These figures are gross underestimates of the true incidence of occupational ill-health. Thus for example the "true" figure of occupational cancer deaths alone in the U.K. may be to the order of 5,000 per year. While only about 300 workers receive disablement benefit for industrial dermatitis every year, there may be between 15,000 and 60,000 new cases of this condition every year.
Extrapolation from the UK Labour Force Survey suggest that in a year at least one million people believed they had ill health caused by work and a further million believed they had ill health made worse by work. Explore this further.
Hazard is the potential to cause harm. Risk is a measure of the likelihood of a specified harmful effect in specified circumstances.
It is important to distinguish between hazard and risk.
Hazards in the workplace include the following:-
Various aspects of work organisation may be stressors.
The responsibilities of the employer mainly stem from legislation such as the Health and Safety at Work etc. Act (1974), but other more recent UK and European Union legislation is very important in managing health and safety at work. These include the Management of Health and Safety at Work Regulations, Control of Substances Hazardous to Health Regulations, Manual Handling Operations Regulations, Personal Protective Equipment at Work Regulations, and various others.
The image on the left shows a worker, protected from a chemical exposure contained within a reaction vessel, provided with local exhaust ventilation at the orifice of the vessel, designed so as to suck away any gases or vapours as they emanate from the vessel. In addition, he is wearing personal protective equipment consisting of an airhood supplied by piped breathing air, as well as rubber gloves, safety shoes and other skin protection. In his case there are therefore several lines of defence to protect him from exposure.
Assessment Of Health Risks Created By Work
Prevention or Control of Risks
Monitoring and Evaluation
Consultation, Information, Instruction & Training
Sadly, not all information or instruction is useful and appropriate. Information alone is not a substitute for reasonable risk reduction strategies. Consider the image below, for example.
Would you consider that advising workers not to inhale blue asbestos is a reasonable way of protecting their health, and preventing ill-health?
Incidentally - this sign was attached to the boilerhouse of a National Health Service hospital in Britain, and the photo was taken in the early 1980s. With attitudes such as those illustrated by the photo, it is no surprise that hundreds of workers are sadly still dying every year from mesothelioma caused by occupational exposure to asbestos several years previously, and the number is set to continue rising before it eventually falls.
Record Keeping, and Reporting
These are requirements of several regulations, and are essential means of assessing the adequacy of risk reduction measures and of identifying previously unrecognised hazards.
The occupational history is often very important in identifying relevant exposures and linking them to ill-health. The concept of "cumulative exposure", i.e. a quantitative measure of the intensity of exposure and the duration of exposure, is important, since generally it is the main determinant of risk. Health may be harmed by occupational exposures in many different ways, and practically any organ system can be affected.
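As an illustration (the formula and the numbers below are a standard textbook expression of the idea, not taken from this page), cumulative exposure is the sum of each exposure concentration multiplied by the time spent at that concentration:

\[ \text{Cumulative exposure} = \sum_i C_i \, t_i, \qquad \text{e.g. } 2\ \text{mg/m}^3 \times 10\ \text{yr} + 1\ \text{mg/m}^3 \times 5\ \text{yr} = 25\ \text{mg·yr/m}^3 \]

Two workers with the same cumulative exposure may therefore carry a similar risk even if one was exposed at high intensity for a short period and the other at low intensity for much longer.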
Some examples follow - (starting with the lungs and skin, the organs of first contact for most chemical occupational exposures):-
Genitourinary and endocrine
In Great Britain, the Employment Medical Advisory Service of the Health and Safety Executive employs medical doctors who should be available to advise workers or their general practitioners.
Some National Health Service Trusts also offer this facility to patients referred by their general practitioners:
A separate page provides more information about the control of risks to health from work. | <urn:uuid:bfc41ba0-8629-4d0e-a568-2fdcdfbe66c2> | CC-MAIN-2014-42 | http://www.agius.com/hew/resource/workenv.htm | s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067496.61/warc/CC-MAIN-20141017150107-00089-ip-10-16-133-185.ec2.internal.warc.gz | en | 0.943112 | 971 | 2.765625 | 3 | {
"raw_score": 2.835977554321289,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
A circular saw is a hand-held electric saw that uses a flat circular blade to cut wood, metal, or plastic. Different circular blades are used depending on which material needs to be cut. It is designed for ripping and crosscutting materials that might be too large for a table saw. A circular saw is used for straight-line cutting only. The handle of the saw has an on/off switch and an arbor nut to hold the blade in place. This saw also has guards to protect the user from touching the blade while in use. These types of saws also have height/depth and bevel adjustments.
Why they are worth having
This is an extremely important tool to have if you’re working with large amounts of wood, metal or plastic that a simple table saw could not be used for. The circular saw is normally used for heavy work where stronger and thicker materials are being used. It is important, when using a circular saw, to first inspect the wood (or other material being used) and remove all nails and screws before cutting. Serious accidents may occur if the rotating blade of a circular saw hits a steel screw or nail.
What you can use them for
The circular saw, as mentioned above, can be used for projects that require ripping and cross cutting, bevel cutting and plunge cuts. When ripping (or cutting), the material should be supported on a large table or on the floor, and several pieces of scrap wood should be placed underneath the wood for additional support. This scrap material should be at least 1″ thick in order to ensure that the saw’s blade does not cut through the surface below. For narrower cuts/rips, first consult the rip guide that usually comes with your circular saw. If this guide is not included, it can be purchased separately. For wider cuts (such as plywood) clamp down a long metal rule or a straight piece of wood as a guide. It’s important that before you begin cutting, you adjust the saw’s depth of cut so the blade doesn’t cut through to the scrap wood or floor that you are using as support. Also ensure that the power cord is free and clear of the cut area so that it doesn’t cause issues.
Bevel cuts (cuts that are not at right angles) are performed in the same way as crosscuts and rips. Again, make sure that the blade does not come into contact with the underlying supports or the scrap wood. Plunge cuts are when you begin the cut in the middle of the work, rather than at the edge. One example of a plunge cut is when you need to put a skylight into the middle of a roof, rather than starting at the edge. First, mark the areas that you want to cut out, and then begin near the corner of one side and place the front edge of the saw base firmly on the wood. Lift up the saw guard using the correct procedure, and make sure that you have adjusted the depth of cut appropriately. Make sure you consult the user’s manual so you are well aware of how to best use the circular saw.
"raw_score": 1.5825289487838745,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Home & Hobbies |
What is considered a yearling deer?
A yearling, on the other hand, is a deer experiencing its second year of life and will be 12 to 24 months old. Some hunters claim that if a deer has lost its spots, it’s no longer a fawn, but that is not true. Regardless of the spots, if a deer is experiencing its first hunting season, it’s a fawn.
How do you tell if a deer is a yearling?
Yearling deer are the easiest to identify. They are born just a few months before the opening of archery season. Although their spots have faded, they still appear different than the rest of the herd. Yearlings often stand near their mother, but it’s important to know how to spot them when they’re alone.
How do you tell the difference between a mature doe and a yearling?
The mature doe has the long shape of a large suitcase, while the young deer will resemble a square box or briefcase. Fawns and juvenile deer will have short snouts, whereas an adult has an elongated nose. Adults will also have darker tarsal staining, compared to no staining on yearlings.
Is it OK to shoot yearling deer?
“Kill yearlings” isn’t a message you expect to hear from the Quality Deer Management Association. Many viewers were surprised that both of the skulls were from 1 1/2-year-old deer. "The biggest difference is, obviously, the number of points," Adams said.
How big is a yearling whitetail deer?
Yearling bucks, which range from small spikes to basket-racked 10-pointers, typically weigh 105 to 125 pounds. Northern does weigh 105 to 120 pounds field dressed. For decades, some hunters have relied on chest-girth charts to estimate live weights of deer.
How do you tell if a baby deer is a male or female?
The only way to tell the sex of a fawn is to inspect between its legs where the important parts are – just like the doctor did when you were born. In fact, it is impossible to distinguish the sex of newborns of most any species unless you physically examine them.
Is it OK to shoot a doe with fawns?
Although the vast majority of fawns are 100-percent weaned, some does will still let their fawns nurse well into the hunting season. There is absolutely nothing wrong with shooting that doe, because remember, her fawns are already weaned.
Is a yearling a fawn?
As nouns the difference between fawn and yearling is that fawn is a young deer while yearling is an animal that is between one and two years old.
How do fawns eat?
The mother deer or doe may nurse her fawn three to four times a day. Deer milk is very rich. Once the fawn is old enough, it eats the same food as its mother… plants, including leaves, twigs, fruits and nuts, grass, corn, alfalfa, and even lichens and other fungi.
How long is a whitetail deer pregnant for?
White-tailed deer/Gestation period | <urn:uuid:ac5abb6c-6b2c-4524-85e1-732052eab46b> | CC-MAIN-2024-10 | https://greed-head.com/what-is-considered-a-yearling-deer/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.13/warc/CC-MAIN-20240222161802-20240222191802-00446.warc.gz | en | 0.975294 | 679 | 2.6875 | 3 | {
"raw_score": 1.6466482877731323,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Sports & Fitness |
Civil War and Civil Language: Word Choice and the Newsroom
Today's debate is on the use of "civil war" to describe the
struggles in Iraq. News organizations such as the Los Angeles
Times and NBC News have begun to include that phrase in their reporting. Other news organizations remain in a holding
pattern with terms such as "sectarian violence." Tony Snow, speaking for the Bush
administration, insists that "civil war" overstates and mischaracterizes the
nature of the violence on the ground.
So what is a responsible editor to do?
The answer will be easier when we realize that the responsible
choice of words is one of the most important and common challenges in American
politics and journalism. Consider these:
- pro-choice vs. pro-life vs. pro-abortion vs. anti-abortion
- illegal alien vs. illegal immigrant vs. undocumented worker
- refugee vs. evacuee
- invasion vs. incursion vs. police action
- prisoner of war vs. enemy combatant
- Islamo-fascist vs. jihadist vs. terrorist vs. Muslim fanatic
vs. Iraqi insurgent
The weight of these choices falls heavily upon the
journalist, as it should. For in
politics, each term carries ideological meaning, even as it appears to the
world in the sheep's clothing of impartiality.
My terrorist, as they say, is your freedom fighter.
George Orwell argues that political abuse and language abuse
together form the double helix of government corruption and tyranny. "In our time," wrote Orwell after World War
II, "political speech and writing are largely the defense of the
indefensible. Things like the
continuance of British rule in India,
the Russian purges and deportations, the dropping of the atom bombs on Japan,
can indeed be defended, but only by arguments which are too brutal for most
people to face, and which do not square with the professed aims of political parties."
The corrupt create language, argued Orwell, that softens or
veils the truth through euphemism or abstraction: "Defenseless villages are bombarded from the
air, the inhabitants driven out in the countryside, the cattle machine-gunned,
the huts set on fire with incendiary bullets:
this is called pacification.... Such
phraseology is needed if one wants to name things without calling up mental
pictures of them."
The political language of our own time has mutated a bit from
what Orwell read and heard. Today, the
debate is framed by simple phrases, repeated so often to stay "on message,"
that they turn into slogans, another substitute for critical thinking. So one side wants to "stay the course"
without settling for the "status quo," and condemns political opponents who want
to "cut and run."
It is one job of the journalist to avoid the trap of
repeating catch phrases, such as "the war on terror," disguised as arguments, and to help the public navigate
the great distances between "stay the course" and "cut and run." Surely, they are not the only options.
Which gets us back to "civil war." The phrase itself is odd, almost an oxymoron. All other denotations and connotations of "civil" are positive, the antithesis of war.
We long for "civility" in speech and behavior, which is a sign of a "civilization."
The phrase is almost ancient. One early use in English,
dated 1387, describes the "battle civil" between two Roman factions. Shakespeare uses the word "civil" at the
opening of "Romeo and Juliet" to
describe the violence between the Capulets and the Montagues. And the exact phrase "civil war" appears in
1649 to describe the struggle between the British Parliament and King Charles I.
We should also remember that the American Civil War was once
called the "War Between the States," which seems neutral when compared with the
contentious language of North and South that created the "War of the Rebellion"
versus the "War of Secession." We should
remember that many terms we take for granted were applied retrospectively by
historians or other experts. (I
explained to someone just today how I -- ignorant of the term "The Great War" --
always marveled at the prescience of those who named World War I, knowing that a
Second World War was sure to come along.)
But as long as we journalists remain scribblers of the first
rough draft of history, we learn to settle.
Our job is to find language that describes the world accurately but in a
non-partisan and -- as my young friend Pat Walters
reminds me -- efficient way.
So, while "illegal alien" turns people into criminal
Martians, so "undocumented workers" seeks to veil their illegal status. Which leads many journalists to "illegal
immigrants," a compromise that seems clear, efficient, and, from my limited
perspective, non-partisan. Others will
and should disagree.
Which leads me to this conclusion:
Journalists should avoid the widespread and unreflective use
of the term "civil war." To use it is to
play into the hands of those who would de-certify the press by framing us as
against our government and American interests abroad. More important, "civil war" is too vague an
abstraction to describe all that is happening on the ground in Iraq. The violence comes from Americans, from
civilians, from militia, from various Muslim sects (against foreigners and each other),
from mercenaries, from criminal gangs, from foreign jihadists. It is less the job of the foreign
correspondent to summarize information in abstract language than to report in
concrete and specific terms on what is happening.
The reporters in Iraq
are, to my mind, men and women of great physical and moral courage, performing
one of democracy's precious duties -- to observe the war as closely as possible -- and to report it back to those of us who claim to govern ourselves. If those observations conflict with
government claims, so be it. We'll argue
the definitions back home, and the news media here can cover that, too. | <urn:uuid:3f1c15a6-cae0-4c33-ac4b-65a2e978c1e0> | CC-MAIN-2016-07 | http://www.poynter.org/2006/civil-war-and-civil-language-word-choice-and-the-newsroom/79531/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154682.35/warc/CC-MAIN-20160205193914-00013-ip-10-236-182-209.ec2.internal.warc.gz | en | 0.947243 | 1,393 | 2.578125 | 3 | {
"raw_score": 2.9301412105560303,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Politics |
8.1.2 Component Removal, Through Hole Components, Solder Fountain Method
This procedure covers the general guidelines for through hole component removal using a solder fountain system.
There is basically only one style of through hole component. Whether there are a few leads or many, or whether the component is large or small, the component removal principles using this method are the same.
Caution - Operator Safety
This process uses molten solder and exposes the untrained operator to serious hazards. A thorough review of the equipment manual and comprehensive training are mandatory. Daily maintenance is essential. Consult the equipment manual for more information.
Caution - Component Sensitivity
This method may subject the component to extreme temperatures. Evaluate the component's tolerance to heat prior to using this method.
Caution - Circuit Board Sensitivity
Circuit boards are made from a great variety of materials. When subjected to the high temperatures of the molten solder used in this method they are susceptible to the following types of damage:
1. Layer delamination.
2. Copper delamination, separation of pads, barrels of inner layers.
3. Burns and solder mask chipping.
Each circuit board must be treated individually and scrutinized carefully for its reaction to heat. If a series of circuit boards are to be reworked, the first several should be fully protected until a reliable procedure is established.
Minimum Skill Level - Expert
Recommended for technicians with advanced soldering and component rework skills and extensive experience in most repair/rework procedures.
Conformance Level - Medium
This procedure may have some variance with the physical character of the original and most likely varies with some of the functional, environmental and serviceability factors.
Precision microscope with stand and lighting for work and inspection.
General purpose oven for drying, baking and curing epoxies.
Images and Figures
Through Hole Component
Figure 1: Typical solder fountain system.
Solder Fountain System
Most solder fountain systems have the same basic components. A solder pump and solder reservoir, various nozzle sizes and controls for solder flow height.
Solder from the reservoir is driven up through the nozzle by the pump. Nozzles are made of steel with welded seams and connections. It is important that the nozzle construction allow for the capture of the pump's inflow and for the runoff of the solder. This prevents the excess solder from splashing and maintains a usable solder level above the nozzle lip.
Occasionally the opening in the solder fountain table needs to be restricted to prevent solder splash from contaminating the un-worked part of the board. Do not close the opening too tight or you may impede the nozzle run off.
Above the solder fountain head there is generally a light projected alignment mark that permits you to center the part to be removed over the nozzle.
Solder Height Adjustment
Solder height should be set at 1.50 mm - 3.00 mm (.060" - .120") above the lip of the nozzle. The ideal situation is to have the leads of a component just immersed and wetted without having the wave exert any upward pressure on the circuit board. The solder fountain table surface should be parallel to the nozzle surface. Components and leads on the bottom side of the circuit board may cause the PC board to be uneven, this condition must be compensated for.
Insufficient immersion will prevent proper heat transfer and reflow. Excess pressure will cause solder to surge up through holes and to spill out onto the top side of the circuit board.
Solder Temperature Adjustment
Solder temperature adjustment varies depending on various factors. The normal setting is 260 C (500 F). During heavy use, solder temperature may cycle between 250 C - 270 C (480 F - 520 F). The heaters should react quickly to normal drops in temperature. The heaters may overshoot the preset temperature when vigorous activity is suddenly halted. Operators must be alert to temperature fluctuations that exceed preset standards.
Solder Fountain Time Adjustment
This adjustment can be used to precisely control operations of a repetitive nature or in instances where you want to strictly control a circuit board's exposure to the solder fountain heat.
The timer may also be set to maximum and the on/off action of the wave is controlled by the motor's on/off foot pedal or by lifting the board on and off the wave.
There are a variety of removal tools to help extract the component once reflow has been achieved. The extractor tool should provide the operator a good grip but should not unduly damage the component during removal.
PC Board Pre-heat
Recommendations for pre-heat range from 1 to 4 hours at 65 C - 120 C (150 F - 250 F). The requirements of temperature and time for pre-heat depend on the board construction, age and exposure to the atmosphere.
In general terms the pre-heat will serve four purposes.
To drive out volatiles or moisture from the circuit board. Moisture that has penetrated the board may cause expansion or delamination when it is rapidly heated.
To prevent thermal shock to the board. Ambient temperature in buildings in the winter can be as low as 13 C (55 F.) As the circuit board at this temperature comes in contact with molten solder, the extreme shock of the widely varying temperature may cause surface or internal damage.
Pre-heat may permit you to pre-expand the circuit board. Some circuit boards expand so severely at the point of high heat that they will bow up or down enough to create difficulties in maintaining a proper board profile to the solder wave.
Pre-heat raises the temperature of the circuit board and the component to be removed. This allows for quicker component removal. This reduces the potential for burning of solder mask and the circuit board surface and reduces potential for other thermal damage.
Procedure - Circuit Board Preparation
The area surrounding the component to be removed may need protection. If components or the circuit board surface are susceptible to damage or exposure to solder they may be protected by using the following procedure:
Straighten any leads that may prevent the easy removal of the part.
Apply high temperature tape to any flat surfaces surrounding the rework area to insulate the surface from extreme temperatures. Or apply high temperature flexible mask to protect irregular surfaces. The mask may need baking to provide the proper cure prior to reflow.
Select an extractor tool and check the fit to be sure the component can be grabbed easily.
Procedure - Circuit Board Pre-heat
Circuit boards returned from the field or where they have been exposed to moisture for some time.
Bake for 4 hours at approximately 75 C (165 F). The PC board should then be pre-heated for one hour prior to removal of the part. If possible, perform reflow immediately upon removal of the circuit board from the oven after completion of the baking cycle. If the circuit board must sit between the pre-heat and removal, it may sit for a maximum of one night only in a dry atmosphere.
Top heat during removal is only used when working with the most difficult components. To apply top heat, a heat gun is positioned directly above the solder nozzle at a set distance above the circuit board surface. Top heat is applied for a set time prior to activating the solder fountain. Heat sensitive chalk applied to the component will signal when the proper temperature has been achieved.
Other component temperature indication techniques can be used.
Procedure - Removal Process
Turn on the solder fountain system and allow the solder to reach the proper operating temperature. Clean the machine as needed and test run the pump to be sure there is no buildup of contamination that may cause a drag on the pumping system.
Select the proper nozzle and install it into the solder fountain system. A nozzle that is too large will expose the circuit board surface to unnecessary heat. A nozzle that is too small may not reflow all the component leads.
Check the table height and solder wave height to be sure they are properly set for the circuit board to be worked on.
Apply flux to all the leads of the component to be removed. Apply the flux to both the top and bottom side solder fillets.
Place the circuit board over the nozzle. Check the position using the alignment light.
Activate the solder fountain. Once full solder reflow has been achieved extract the component with the extractor tool. Operator skill and experience are required to prevent hole and pad damage caused by premature removal or from heat damage due to delayed removal.
Immediately drop the solder fountain to prevent over exposure.
Allow the circuit board to cool before handling and inspection.
Clean the area and inspect for signs of damage.
Video above courtesy of Air-Vac Engineering Company | <urn:uuid:4bd19afc-3feb-4e65-852f-84c727470992> | CC-MAIN-2021-49 | https://circuitrework.com/guides/8-1-2.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00497.warc.gz | en | 0.90631 | 1,791 | 2.6875 | 3 | {
"raw_score": 2.3271682262420654,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Hardware |
Many Americans and Canadians travel back and forth across our shared border with ease. Thanks to the friendship between the countries, the crossing is typically very easy. Having personally entered Canada six times by land, air and sea, I can attest to this.
That is until one is convicted of an offense Canada considers a serious crime.
The Differences Between Canadian and United States Criminal Law
While the United States and Canada are culturally more alike than different, there are some major differences in their respective criminal justice systems.
In the United States, national level criminal laws (also known as federal offenses) are those which have been passed by Congress, signed by the President and not been overturned by the Supreme Court as being unconstitutional.
These are typically very serious offenses deemed to be crimes against the United States government’s constitutionally granted authority, jurisdiction and national security powers. Examples include but are not limited to terrorism, racketeering, organized crime, bank fraud, bank robbery, counterfeiting, conspiracy to distribute drugs, certain firearms offenses and mail fraud. Per the constitution, everything else delegated to the individual states.
Canada also maintains a federal system, dividing power between the government of Canada and the governments of its provinces. However, the Canadian constitution grants the Canadian government quite a bit more power and jurisdiction. While the United States government states that anything not mentioned in the constitution falls to the states, the Canadian constitution grants specific powers to the provinces, and anything not mentioned falls to the Canadian government first.
Criminal law, for instance, is a power specifically granted to the Canadian government.
Neither country admits citizens from the other who have been convicted of an equivalent national level (or federal, as we call them in the United States) offense. For instance, someone convicted of drug trafficking in Canada would not be admitted to the United States, and vice versa, as drug trafficking is a federal offense in both countries.
However, here’s where it gets tricky. Driving under the influence in the United States is a crime in every single state. However, it is not a federal crime, as the United States government has no law against it. It is an area of criminal law that has been delegated to the states.
Driving under the influence in Canada, however, is a national crime, falling under the national criminal code.
Therefore, if you are convicted by your state of a DUI first offense, you now have an equivalent national level conviction on your record in the eyes of the Canadian government. On the flip side, if you are convicted of a DUI in Canada, you do not have an equivalent federal offense in the eyes of the United States government.
Indeed, Canadian Immigration and Citizenship (CIC) specifically mentions driving under the influence as making someone criminally inadmissible. Meanwhile, the United States Customs and Border Protection agency specifically mentions that one DUI conviction in Canada does not make one inadmissible to the United States. Only multiple DUI convictions or a DUI conviction in tandem with other offenses might veer one into moral turpitude, which then might be grounds for denial of entry.
Options for entry to Canada
When convicted of a single misdemeanor DUI in the United States, there are options to enter Canada after one’s probationary period has been completed. This typically means that all aspects of the punishment have been completed, including the specific probation period (typically one year for a first offense), classes, fines, costs, fees, interlock and license restriction period. Until all of this is complete, it is virtually impossible to enter Canada. One may as well not even think about it.
After all probationary items have been satisfied, it is extremely difficult, but not impossible to enter Canada over the next five years. The program is called a Temporary Resident Permit. The application for the permit must make a convincing case that there is a benefit to Canada of one’s admittance and activities that outweigh any perceived harm. Unfortunately, the economic benefits of a vacation typically do not fall under this category. This category is reserved for family emergencies, political figures, business travelers and entertainers who are benefitting the Canadian government and/or economy, and for humanitarian waivers. The application for a TRP is $200 Canadian, and is non-refundable. It is also advisable to have a lawyer versed in Canadian immigration issues assist with this application.
After five years from the end of probation, and with no other criminal violations, it becomes a bit easier. Here, one can apply for rehabilitation. Rehabilitation means that one leads a stable lifestyle and that they are unlikely to be involved in any further criminal activity.
One is eligible to apply for rehabilitation if they have committed an act outside of Canada that would require a sentence of ten years or less if committed in Canada; and five years have elapsed since the act or five years have passed since the end of the sentence imposed. Canada uses whichever date is later. For instance, if one is convicted of a DUI that includes a year of a suspended/restricted license and probation, the five-year period does not start until the last day of probation.
The application is detailed and lengthy, but worth the trouble if one wants or needs to go to Canada. There is also a $200 Canadian fee, non-refundable.
After ten years from the end of a DUI probation, and with no subsequent offense, Canada deems one rehabilitated and allows entry.
Fortunately, Canada seems to be the only country with a strict travel restriction against United States Citizens convicted of a single misdemeanor DUI.
That said, it is not recommended to travel for leisure while enrolled in VASAP classes, as vacation is not an approved absence. Once VASAP classes are completed, and while still under the year of probation, one must notify their case manager if they intend to leave the country. | <urn:uuid:02f4071d-2f1b-4e1c-aaf7-3d8cf34ddba8> | CC-MAIN-2017-43 | https://overcomingadui.com/2017/08/10/travel-canada-after-dui-conviction/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824820.28/warc/CC-MAIN-20171021152723-20171021172723-00255.warc.gz | en | 0.959797 | 1,183 | 2.609375 | 3 | {
"raw_score": 2.5765790939331055,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Crime & Law |
Property Pages (Visual Studio)
By using property pages, you can specify settings for Visual Studio projects. To open the Property Pages dialog box for a Visual Studio project, on the Project menu, click Properties.
You can specify project settings so that they apply all build configurations, or you can specify different project properties for each build configuration. For example, you can specify certain settings for the Release configuration and other settings for the Debug configuration.
Not all available pages are necessarily displayed in the Property Pages dialog box. Which pages are displayed depends on the file types in the project.
For more information, see How to: Specify Project Properties with Property Pages.
When you use the New Project dialog box to create a project, Visual Studio uses the specified project template to initialize the project properties. Therefore, the property values in the template can be thought of as default values for that project type. In other project types, the properties may have different default values.
A project property value appears in bold if it is modified. A project property can be modified for the following reasons:
The application wizard changes the property because it requires a different property value than the one that is specified in the project template.
You specify a different property value in the New Project dialog box.
You specify a different property value on a project property page.
To see the final set of property values that MSBuild uses to build your project, examine the preprocessor output file, which you can produce by using this command line: MSBuild /preprocess:preprocessor_output_filename project_filename (both the output file name and the project file name are optional).
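As a minimal illustration (the file names below are placeholders, not names defined by this documentation), the command could be run from a Visual Studio command prompt as follows:

REM Placeholder names for illustration only; substitute your own project file.
REM /preprocess inlines every imported file into a single output file.
MSBuild /preprocess:MyProject_evaluated.xml MyProject.vcxproj

Searching the resulting file for a property name shows the final value that MSBuild will use when building the project.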
When you view the Property Pages dialog box for a project and the project node is selected in Solution Explorer, for many properties, you can select inherit from parent or project defaults or modify the value another way.
When you view the Property Pages dialog box for a project and a file is selected in Solution Explorer, for many properties, you can select inherit from parent or project defaults or modify the value another way. However, if the project contains many files that have property values that differ from the project default values, the project will take longer to build.
To refresh the Property Pages dialog box so that it displays the latest selections, click Apply.
Most project defaults are system (platform) defaults. Some project defaults derive from the style sheets that are applied when you update properties in the Project Defaults section of the General configuration properties page for the project. For more information, see General Property Page (Project).
You must define the value for certain properties. A user-defined value can contain one or more alphanumeric characters or project-file macro names. Some of these properties can take only one user-defined value, but others can take a semicolon-delimited list of multiple values.
To specify a user-defined value for a property, or a list if the property can take multiple user-defined values, in the column to the right of the property name, perform one of the following actions:
Type the value or the list of values.
Click the drop-down arrow. If Edit is available, click it and then in the text box, type the value or list of values. An alternate way to specify a list is to type each value on a separate line in the text box. On the property page, the values will be displayed as a semicolon-delimited list.
To insert a project-file macro as a value, click Macros and then double-click the macro name.
Click the drop-down arrow. If Browse is available, click it and then select one or more values.
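To make the options above concrete, here is a sketch of a semicolon-delimited value as it might be entered for the C/C++ Preprocessor Definitions property; ExtraDefines is a hypothetical user-defined macro, not one supplied by Visual Studio:

WIN32;_DEBUG;$(ExtraDefines);%(PreprocessorDefinitions)

The $(...) reference is expanded by MSBuild when the project is built, and %(PreprocessorDefinitions) typically represents the values inherited from the parent or project defaults.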
For a multi-valued property, the inherit from parent or project defaults option is available when you click the drop-down arrow in the column to the right of the property name and then click Edit. By default, the option is selected.
Notice that a property page only displays the settings at the current level for a multi-valued property that inherits from another level. For example, if a file is selected in Solution Explorer and you select the C/C++ Preprocessor Definitions property, file-level definitions are displayed but inherited project-level definitions are not displayed. To view both current-level and inherited values, click the drop-down arrow in the column to the right of the property name and then click Edit. If you use the Visual C++ project model, this behavior is also in effect for the objects on files and projects. That is, when you query for the values on a property at the file level, you will not get the values for that same property at the project level. You must explicitly get the values of the property at the project level. Also, some inherited values of a property may come from a style sheet, which is not accessible programmatically. | <urn:uuid:7b8cd2b7-e2e5-4880-bb73-058de3c92ab8> | CC-MAIN-2014-41 | http://msdn.microsoft.com/en-US/library/675f1588(d=printer,v=vs.110).aspx | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00263-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | en | 0.721971 | 977 | 2.71875 | 3 | {
"raw_score": 2.2748773097991943,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Software Dev. |
Miss Clifford, Miss Dixon
assisted by Debbie
Please support your children to keep their times tables knowledge running smoothly! We rely more and more on quick recall of mental maths facts to underpin our learning. Time spent practising on Times Tables Rock Stars forms part of their weekly homework.
Please find the Maths Methods Handbook here
Reading records and reading books must be in school every day.
Bring a water bottle to school each day.
PE is on a Thursday and children should come to school wearing their kit.
The Boutcher Mile is on Friday. Children should have their trainers available.
Our library session is on a Thursday so please have your library book in school to renew or change.
Children will receive homework on Wednesday.
In Year Five, weekly homework is:
daily reading (with responses in reading journal)
comprehension x 1
ten spelling words and sentences plus a Readiwriter activity
practise of times tables through Times Tables Rock Stars (minimum of 25 minutes spent each week)
All homework needs to be completed for Wednesday morning. Please upload the spelling sentences to Google Classroom to be marked. The comprehension book should be brought into school for Wednesday to be marked in class.
Children will be tested on their spellings each Wednesday and their score will be recorded in their reading journal.
We will begin the term with a beautifully illustrated text titled Mythologica. This book has been chosen as it again links to our study of Greek Myths as part of our historical study of Ancient Greece. We will uncover the colourful lives of 50 powerful gods and goddesses, earth-dwelling mortals and terrifying monsters.
Our first PSHE topic of the spring term is Dreams and Goals. In this unit, children will think about their aspirations, how to achieve their goals and the emotions that go with this. The unit will be split into the following lessons:
After half-term, they will move on to the topic, Healthy Me where they will learn about keeping safe and healthy. The unit will be split into the following lessons:
The first RE topic for the spring term answers the question: How does Holy Communion build a Christian community? This question will be broken down into weekly lessons and will cover the following points:
• what Jesus said and did about Communion and how Christians remember this;
• how and why Christians share in the Body and Blood of Jesus;
• how the act of sharing Communion demonstrates God’s Peace;
• the legacy of Jesus and how it may help Christians today in their legacy.
The first science topic of the spring term is Living Things and Their Habitats. In this topic, children will describe the life process of reproduction in some plants and animals including mammals and they will describe the differences in the life cycles of an amphibian and an insect by exploring complete and incomplete metamorphosis.
In our next topic, children will learn about forces and this will include gravity, air resistance, water resistance and friction and they will have the chance to create their own simple mechanism too. There will be plenty of opportunities for them to test and carry out fair investigations during lessons.
Topical Talk is a programme from the Economist Educational Foundation which supports discussions about current affairs. For children in Years Five and Six, we hold regular lessons where they explore a current story in the headlines. The activities foster progress in listening, speaking, problem-solving and creativity skills.
Year 5 will be learning about Ancient Greece this term. This will be an exciting topic, which I know they are looking forward to! Throughout this topic, they will build on their knowledge of empires, learn more about daily life there, make some comparisons between life in Athens and life in Sparta and know that the Olympic Games is one example of a legacy of Ancient Greece. They will also be learning about the beliefs of the ancient Greeks during their reading and writing lessons.
In Geography, they will do a comparative study between two contrasting locations.
In art, children will learn more about the artist Van Gogh and explore his works. They will use their sketch books to improve their mastery of art and design techniques, with focus on painting.
We will begin this term with the book, The Odyssey, which links perfectly with our history topic of Ancient Greece. This illustrated story takes children on an adventure with the greatest of heroes - Odysseus - as he battles great monsters, gods and mortals on his voyage home to Ithaca. Children will use this book as a prompt for their own writing and will finish the sequence by writing their own epic adventure story. | <urn:uuid:e47d382c-1ecd-4188-96c9-de8102a1e795> | CC-MAIN-2023-23 | https://www.boutcher.southwark.sch.uk/year-5 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650264.9/warc/CC-MAIN-20230604193207-20230604223207-00673.warc.gz | en | 0.947021 | 997 | 3.328125 | 3 | {
"raw_score": 2.0471208095550537,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Education & Jobs |
Constipation is one of the most common digestive complaints worldwide, with everyone getting constipation at some time in their lives. Though not usually serious, it can nonetheless be uncomfortable and frustrating. In the USA alone, constipation affects 2 per cent of the adult population, accounting for up to 2.5 million doctor visits annually and medication costs worth millions of dollars.
A local study published in the Singapore Medical Journal in 2000 showed constipation affected about 7.3 per cent of people aged 16 years and above.
What is constipation?
Constipation is a symptom and not a disease in itself. Constipation occurs when bowel movements become difficult or less frequent. The normal length of time between bowel movements ranges widely from person to person, but generally when movements stop for more than three days, the stools or faeces become harder and more difficult to pass. Patients may experience abdominal bloating, cramping pain or even vomiting.
Although definitions vary, one is considered constipated if there are two or fewer bowel movements in a week, or if one has two or more of the following symptoms for at least three months:
- Straining during a bowel movement more than 25 per cent of the time
- Hard stools more than 25 per cent of the time
- Incomplete evacuation more than 25 per cent of the time
What causes constipation?
There are numerous causes of constipation and it is not always possible to identify a definite cause in each patient. The majority of patients can be managed conservatively and the doctor’s vital role is to identify more serious causes that might require surgical treatment. Causes may include:
- Dietary disturbances (inadequate water intake, too little or too much dietary fibre, disruption of regular diet or routine)
Related article: Are fibre supplements as good as fibre-rich foods?
- Inadequate activity/exercise or immobility
- Excessive/unusual stress
- Medical conditions: hormonal (hypothyroidism), neurological (stroke, Parkinson’s disease), depression, eating disorders
- Medications (antacids containing calcium or aluminium, strong pain medicines like narcotics, anti-depressants, iron pills)
- Colon cancer
Related article: 7 tips to lower your risk of colon cancer
When is constipation a cause for serious concern?
Most people do not need extensive testing. Only a small number of patients with constipation have a more serious underlying problem. Symptoms that could point to more serious causes that warrant early attention include:
- Constipation that is a new problem for you.
- Your constipation has lasted more than two weeks.
- You have blood in your stools.
- You are losing weight even though you are not dieting.
- Your bowel movements are associated with severe pain.
- You are more than 50 years of age with a family history of colorectal cancer.
The vast majority of patients with constipation do not have any obvious underlying illness to explain their symptoms (which would be termed secondary constipation); instead, they suffer from one of two types of functional constipation (primary constipation):
- Colonic inertia: A condition in which the colon contracts poorly and retains the stools.
- Obstructed defecation: A condition in which a person excessively strains to expel the stools from the rectum. This may be due to a lack of coordinated anal muscle contractions or structural problems like rectal prolapse or a combination of both.
Nonetheless, these problems can be difficult to manage and can significantly affect one’s quality of life. Occasionally, functional constipation may be part of a more complex pelvic floor disorder. As such, even after excluding more life-threatening causes like cancer, persistent constipation should not be neglected and patients can still benefit from specialist help.
Public Forum: Find out how to “Stay in Control” by attending the SGH public forum on pelvic floor disorders on Saturday, 26 October 2013, 1-5pm. Topics include constipation, faecal/urinary incontinence and womb/vaginal prolapse. Venue: Suntec City Convention Centre, Level 3, Seminar Rooms 300-302. Entrance fee: $5/pax. To register, please call 6576-7658 or 6326-5151 from 10:30am-5:30pm, Monday-Friday, or email [email protected] by 23 October 2013.
Health Xchange's articles are meant for informational purposes only and cannot replace professional surgical, medical or health advice, examination, diagnosis, or treatment. | <urn:uuid:6b726ce1-5360-4771-b9f7-bb297a33c462> | CC-MAIN-2016-22 | https://sg.news.yahoo.com/blogs/fit-to-post-health/constipation-life-threatening-064005346.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275764.85/warc/CC-MAIN-20160524002115-00036-ip-10-185-217-139.ec2.internal.warc.gz | en | 0.92452 | 1,055 | 3.09375 | 3 | {
"raw_score": 1.8808711767196655,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Health |
Learning is an integral part of our lives and it never stops at any level. Because of this, we have to improve our learning abilities so that we can acquire knowledge easily and understand the concepts we take in much faster and better.
There are many ways that are known to help boost our learning abilities as humans like emptying our minds and being attentive to what we are learning as well as visualization, just to name a few.
Another great way to get better at learning is through consistent meditation.
And here we are going to have a deeper look at how meditation helps with learning and what research studies have found out in regard to this subject.
Getting to Know The Art of Learning Better
Learning involves gaining certain skills or knowledge, by way of being taught, studying, or through experience.
People take the time to learn a specific thing because they want to understand it well or because they want to be good at something in the case of a skill or the like.
Look at it this way, to be good at driving, for example, one has to attend driving classes so that they are guided on how to get a car running, balance it on the road and park it properly.
The same case applies to learning anything else out there.
If one wants to understand how to bake or draw or do anything, they will look for information that can help them on the same.
Now, learning capacity varies considerably from one individual to another, and this depends on factors such as personality, motivation, readiness and willingness to learn, attention, the health of the learner, and the level of aspiration to achieve, among many others.
As a result of this, there are people that experience learning difficulties, and thus their ability to gain knowledge and skills at the same rate as others is reduced.
Because of this, people keep seeking ways through which they can improve their learning capabilities, and the most common methods explored include; finding opportunities that allow them to exercise what they have learned, eating foods that boost mental strength, engaging in collaborative learning, avoiding multitasking, taking notes, and meditating.
There are countless other ways that help improve learning but our main focus will be on meditation.
Meditation as a practice is mostly known for personal improvement in areas such as finding peace and inner calmness, being more aware, loving, and kind, but there is more to it than just that.
It can also help us improve the way we interact with learning and therefore have an easier time absorbing new concepts and making them work in our lives.
Many people are exploring it individually, but teachers as well are starting to make use of it in classrooms. The idea behind the whole approach is to aid their students to make the most of their learning and especially in utilizing the knowledge that they gain.
Let’s take a deeper look at exactly how meditation positively impacts learning.
Learning Benefits of Meditation
1. Helps You Reducing Mind Wandering And Thought Racing
As human beings, we tend to think about a lot of things from our work or school, to our families, to our financial lives, and how we relate with others.
These thoughts tend to keep our minds busy which might make a hindrance to learning smoothly.
Meditation is a great way to clear our minds.
When you meditate, you are able to get rid of excess mental clutter which is based on worrying about the future and dealing with constant fears as well as stress, and through this, you are able to focus on a specific task, thus improving how you perform it (1).
Once your mind is clear, you are able to grasp new information and digest it easily and with that, you are able to apply what you have learned with a fair amount of ease.
2. Helps With Improving Memory
Memory is a crucial element of our brains when it comes to learning because it allows us to remember what we have learned.
You are able to retrieve the information you gained and get to apply it in various situations.
Meditation has been found to improve memory in various ways.
For instance, it enhances different areas of the brain such as increasing the cortical thickness in the hippocampus, a region of the brain that is associated with memory and learning abilities (2).
In addition to this, meditation results in the formation of more nerve cells, causing them to increase and boost their interconnectivity, protecting them from age-related wear and tear, which results in a healthy brain that is easily receptive to information (3).
3. Helps With Reducing Anxiety
Anxiety interferes with working memory which is associated with learning and memory.
When you are anxious, it is hard for you to think straight and understand properly what you are being taught.
An anxious learner reasons and works less efficiently and hence their learning capability is reduced.
To deal with this naturally, you can engage in a brief session of meditation to help you relax and be calm.
And then make an effort of meditating frequently so that you are able to keep anxiety at bay.
Meditation helps you overcome worrying thoughts and accordingly, gives you mental clarity.
With mental clarity, learning for you becomes a seamless process.
4. Helps With Improving Your Level of Attention And Concentration
There are many forms of meditation that are known to help the practitioner become more focused and attentive. The two most popular ones include mindfulness meditation and focused attention meditation.
Concentration meditation also helps you raise your level of concentration and get to be more productive in your undertakings, including learning.
Learning requires the student to concentrate and remain attentive and focused to what they are learning about so that they can understand perfectly how what they are learning about works, and be able to work with it comfortably.
When you make meditation a habit, you find it easy to remain attentive and focused on your objective than when you meditate only when you are about to learn or not meditate at all.
Doing it for a day or two will give you benefits that are short-lived but by committing yourself to the practice, you build these abilities within you.
Research Studies on How Meditation Affects Learning
Research studies on students that have used various meditation practices to venture into how helpful meditation is for learning have shown that meditation has a positive effect in the area.
A study investigating the effects of a mindfulness meditation course on learning and cognitive performance among university students in Taiwan found that the course was beneficial in enhancing learning effectiveness, memory as well as attention among the students (4).
Another study, looking at mindfulness training and classroom behavior among ethnic minority and lower-income elementary school children, found that teachers observed better classroom behavior in students after they participated in a short mindfulness-based program (5).
A 2016 study that also investigated the impact of a mindfulness meditation intervention on academic performance discovered that students appreciated the mindfulness meditation process and agreed that it helped improve their learning efficiency in the classroom environment (6).
Another study that involved psychology students receiving meditation training and then having a lecture that was followed by a quiz showed that meditation can improve the information retention ability of students (7).
Essentially, meditation has shown the potential to make an effective tool for learning.
Students who tend to form meditation habits have a higher chance of getting better academic results and improved behavior than those who don’t.
We believe that meditation should also be included in the curriculum so that more students are able to enjoy the benefits of the practice.
It is a natural practice that can even be done for a few minutes a day and have significant improvements on the one who is practicing it.
We hope that future research on this topic will help us understand better which styles of meditation would make a good fit for educational centers with students from different cultures and beliefs and how to include them in a way that will benefit the students.
Are you a student who practices meditation and feel it has helped you with your studies? Please share your experiences and thoughts on the subject in the comments below.
We look forward to hearing from you. 🙂 | <urn:uuid:1e4ef219-f039-4212-94e7-72887685e8ac> | CC-MAIN-2021-10 | https://improveyourbrainpower.org/meditation-and-learning/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00507.warc.gz | en | 0.968834 | 1,643 | 3.25 | 3 | {
"raw_score": 2.693831443786621,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
The Athabascan people traditionally lived in Interior Alaska, an expansive region that begins south of the Brooks Mountain Range and continues down to the Kenai Peninsula. There are eleven linguistic groups of Athabascans in Alaska. Athabascan people have traditionally lived along five major river ways: the Yukon, the Tanana, the Susitna, the Kuskokwim, and the Copper river drainages. Athabascans were highly nomadic, traveling in small groups to fish, hunt and trap.
Today, the Athabascan people live throughout Alaska and the Lower 48, returning to their home territories to harvest traditional resources. The Athabascan people call themselves ‘Dena,’ or ‘the people.’ In traditional and contemporary practices Athabascans are taught respect for all living things. The most important part of Athabascan subsistence living is sharing. All hunters are part of a kin-based network in which they are expected to follow traditional customs for sharing in the community.
Athabascan House Types and Settlements
The Athabascans traditionally lived in small groups of 20 to 40 people that moved systematically through the resource territories. Annual summer fish camps for the entire family and winter villages served as base camps. Depending on the season and regional resources, several traditional house types were used.
Athabascan Tools and Technology
Traditional tools and technology reflect the resources of the regions. Traditional tools were made of stone, antlers, wood, and bone. Such tools were used to build houses, boats, snowshoes, clothing, and cooking utensils. Birch trees were used wherever they were found.
Athabascan Social Organization
The Athabascans have a matrilineal system in which children belong to the mother’s clan, rather than to the father’s clan, with the exception of the Holikachuk and the Deg Hit’an. Clan elders made decisions concerning marriage, leadership, and trading customs. Often the core of the traditional group was a woman and her brother, and their two families. In such a combination the brother and his sister’s husband often became hunting partners for life. Sometimes these hunting partnerships started when a couple married.
Traditional Athabascan husbands were expected to live with the wife’s family during the first year, when the new husband would work for the family and go hunting with his brothers-in-law. A central feature of traditional Athabascan life was (and still is for some) a system whereby the mother’s brother takes social responsibility for training and socializing his sister’s children so that the children grow up knowing their clan history and customs.
Traditional clothing reflects the resources. For the most part, clothing was made of caribou and moose hide. Moose and caribou hide moccasins and boots were important parts of the wardrobe. Styles of moccasins vary depending on conditions. Both men and women are adept at sewing, although women traditionally did most of the skin sewing.
Canoes were made of birch bark, moose hide, and cottonwood. All Athabascans used sleds – with and without dogs to pull them – as well as snowshoes, and dogs as pack animals.
Trade was a principal activity of Athabascan men, who formed trading partnerships with men in other communities and cultures as part of an international system of diplomacy and exchange. Traditionally, partners from other tribes were also, at times, enemies, and travelling through enemy territory was dangerous.
Traditional regalia varies from region to region. Regalia may include men’s beaded jackets, dentalium shell necklaces (traditionally worn by chiefs), men and women’s beaded tunics and women’s beaded dancing boots.
"raw_score": 2.137425422668457,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | History |
Perspectives on mental health and sexuality:
Qualities of a healthy relationship:
- Based upon trust and equal power
- Makes you feel respected as a person
- Makes you feel important with something to contribute
- Encourages shared responsibility between partners
- Practices open and honest communication
- Each member is willing to compromise
- The privacy of each person is respected
- Each member accepts responsibility for her or his actions
- Makes you feel your opinion is valuable
- There is room for disagreement, and each member is willing to compromise
- Conflict is managed in a respectful manner, without violence
- Helps you grow as a person and retain your sense of self
- Decisions are shared and discussed truthfully
- Free from intimidation, threats, coercion, or violence
- Sexual activity is based upon consent (to agree by free will)
Concerned about a relationship? Does it:
- Help you feel good about yourself?
- Encourage you to have outside interests and friendships?
- Have characteristics you enjoy, and can you give examples of these?
- Accept your needs for space and privacy?
- Value your opinion and seek it before making major decisions?
- Treat you with respect and courtesy?
- Depend on honesty, respect, and shared giving and responsibility?
- Have reasonable expectations?
- Have room for family and other friends?
*Adapted by the University of Washington Counseling Center
What is Consent?
To consent means to give approval and to agree by free will.
Consent is based on choice.
Consent is active, not passive.
Consent is possible only when there is equal power.
Giving in because of fear is not consent.
In consent, both parties must be equally free to act.
Going along with something because of wanting to fit in, feeling bad, or being deceived is not consent.
In consent, both parties must be fully conscious, and have clearly communicated their consent.
If you can’t say “NO” comfortably, then “YES” has no meaning.
If you are unwilling to accept a “NO,” then “YES” has no meaning.
Source: Sexual Assault Prevention through Peer Education. Carrothers & Rypisi, 1997
Depending on your identity and the dynamics of your relationship, there may be several factors to consider as you reflect on how healthy your relationship is. Many unhealthy relationships will follow a certain pattern of behaviors. Click here for a list of variant models of the power and control wheel, a tool commonly used to graph unhealthy relational patterns. http://www.ncdsv.org/publications_wheel.html
Need help? Get connected:
King County Coalition Against Domestic Violence http://www.kccadv.org
Domestic Abuse Women’s Network http://dawnonline.org
NW Network http://nwnetwork.org
Gay Men’s Domestic Violence Project http://gmdvp.org
"raw_score": 2.316274404525757,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Health |
The management of the earth's resources to restore and maintain a balance between human requirements and the other species on the planet is called CONSERVATION. Conservation includes the continual renewal of renewable resources. It also applies to the preservation of non-renewable resources.
Conservation of Air
Air is the atmosphere, a cover a few kilometers thick surrounding the earth, and it is a critical natural resource. It consists of these gases: nitrogen (79%), oxygen (20%), carbon dioxide (0.03%) and traces of inert gases called noble gases.
Oxygen is consumed during respiration. CO2 and nitrogen are used as raw materials in natural cycles for making food and other substances required by living systems. Air is being polluted rapidly because of industrialization and automobiles; polluted air contains gases such as carbon monoxide, hydrocarbons and oxides of nitrogen and sulfur. The following steps can be taken to conserve air:
Air pollution should be controlled.
Trees and other plants should be planted to increase the production of oxygen; they also absorb pollutants.
Production of excessive CO2 should be reduced to check the greenhouse effect.
Environmentally friendly fuels should be used.
Conservation of Water
About 75% of the earth's surface is covered with water. Water is also a component of soil and air, and it is a major constituent of living organisms, making up 70 to 90% of body weight. Around 97% of the planet's total water is in the oceans, 2% is frozen in ice caps and just 1% is available as fresh water in lakes, streams and rivers.
The principal uses of water are domestic and irrigation use (10%) and industrial use (90%). Water is used as a raw material in making a variety of foodstuffs, beverages, liquid detergents and many other products. Sodium chloride (table salt) is obtained from seawater; it is used in cooking and in the manufacture of other valuable chemicals such as chlorine and sodium hydroxide. The following measures can be taken to conserve water resources:
Industrial chemical wastes are extremely toxic. They hinder the natural purification of water, which is carried out by microorganisms. Chemical wastes degrade river water and make it harmful for aquatic life. If pollution of fresh water resources continues, there will be a shortage of fresh water in the future, so measures should be taken to protect water sources.
Watershed management should be used to provide clean water for irrigation.
Water recycling plants should be installed in cities.
Conservation of Soil
Soil is destroyed by poor land management. Man alters the land according to his own needs: intensive cutting of trees and overgrazing cause soil erosion, and the productivity of the land is reduced. Even low rainfall can give moderate productivity, but the land must have sufficient vegetation cover to hold rainwater. Man is destroying such vegetation, and this causes desertification.
Soil conservation involves the retention of water, which is used for growing and supporting vegetation. Its aim is to maintain or increase soil fertility and productivity. Vegetation cover checks erosion effectively; contour plowing and strip cropping are also important for checking erosion.
A greater understanding of crops, soil and climate is also a prerequisite for better management of the soil. Soil management also includes better cropping systems, crop rotation, treatment of crop residues, drainage of waterlogged and swampy soils, removal of toxic salts from irrigated lands and irrigation of dry lands.
Conservation of biodiversity
The variety of living organisms in an ecosystem is called biodiversity. The exact number of species on the planet is not known. Taxonomists have described around 1.4 million species, but they estimate that there are 4 to 30 million more, most of which have not yet been recorded. The following steps can conserve biodiversity:
- Deforestation should be strictly controlled.
- Afforestation and reforestation should be undertaken.
- Steps should be taken to check desertification.
- Hunting should be banned.
- National parks should be established.
- Artificial breeding should be carried out for endangered species.
- Laws should be enacted to conserve natural resources.
Conservation of Energy
We have faced energy shortages in the last decade, caused by rapidly diminishing supplies of non-renewable energy sources. Fossil fuels occur as coal, oil and natural gas. Energy resources can also be classified as inexhaustible or exhaustible.
(a) Inexhaustible energy resources: these include solar energy, falling water (hydropower), wind, ocean thermal gradients, waves, tides, currents, geothermal energy and biomass.
(b) Exhaustible energy resources: these are fossil fuels such as coal, oil and natural gas. Exhaustible energy resources are present in fixed quantities in the earth, so they are limited and will be exhausted sooner or later, which will affect the standard of living of future generations. These sources should therefore be used wisely, and other sources of energy should be sought.
The energy sources on earth are limited. Hence, there should be balanced and planned use of energy resources. We can save energy in the following ways:
- Develop and use energy-efficient machines, engines and manufacturing processes.
- Reduce wastage by recycling.
- Minimize the use of vehicles; travel on foot or by public transport.
- Turn off lights and electrical appliances when they are not in use.
- Limit the use of air conditioning.
Protection of Forests
Forests have great economic and ecological importance, but deforestation has destroyed these natural resources. The following steps can be taken to conserve forests:
- Afforestation and reforestation programs should be started.
- Over-harvesting of forests should be banned.
- Grazing in forests should be controlled.
- Nurseries should be established.
"raw_score": 2.1936612129211426,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Science & Tech. |
"We study and take as an example the Soviet system, but we are developing socialism in our country in somewhat different forms." - Marshal Josip Bronz TitoQuick Points
New Name: The Socialist Federal Republic of Yugoslavia
New Coat of Arms:
New constitution condemns the ruling of the nomenklatura and centralized bureaucratic governments.
Many powers of the Federal Government passed to the Republic Governments.
Many powers of the Republic Governments passed to the Communes.
3 Tier Governing System (Federal, Republic, Commune)
Increased sovereignty for the Socialist Autonomous Province of Kosovo and Socialist Autonomous Province of Vojvodina
Election Date: 29 November 1951
Communes not answerable to the Republic nor Federal Government
Decreased power of the Communist Party of Yugoslavia (new role is education for management)
The bureaucratic Presidium along with all other governing institutions not mentioned.
GOVERNMENT
Federal Assembly
The Federal Assembly is the executive branch of The Socialist Federal Republic of Yugoslavia. It consists of 300 delegates. The delegates will equally represent districts in Yugoslavia. Currently each delegate will represent approximately 50,000 Yugoslavs (ooc: assuming the population of Yugoslavia is 15 million). The Federal Assembly is responsible for constitutional changes and for choosing the direction of The Socialist Federal Republic of Yugoslavia (foreign policy, economy, military, spending).
Republic Assembly
Republic Assemblies exist in each of the six republics and two autonomous provinces. They are free to govern locally in any way, as long as they do not contradict federal policy.
Communes
Local government in Yugoslavia is based on the unique institution of the commune, defined as "a self-managing sociopolitical community based on the power of and self-management by the working class and all working people." Currently there are 250 communes in The Socialist Federal Republic of Yugoslavia, and they are growing in size and number. The communes hold all political authority not specifically delegated to government at the federal or republic level; unlike other bureaucratic systems, the communes truly embolden the workers. Functions such as economic planning, management of utilities, and supervision of economic enterprises are the responsibility of the commune.
The Federal Executive Council
The Federal Executive Council (FEC) is a 14-person council responsible for the everyday governing operations of the government. The FEC consists of a prime minister and two deputy prime ministers, who are elected by the Assembly, and the secretaries of the twelve major federal bureaucracies (the secretariats of finance, foreign affairs, defense, labor, agriculture, industry and energy, internal affairs, foreign economic relations, domestic trade, transport and communication, development, and legal and administrative affairs). Any Republic or Autonomous Province not represented in the FEC gets one representative minister without portfolio.
The FEC debates practical aspects of all national problems, making the FEC the most important national center of political debate, compromise, and influence. The FEC produces compromises on controversial issues among opposing republics and is second only to the party as a decision-making body. By definition, it controls all federal bureaucracies and has exclusive access to expert information needed for policy making.
Judicial System
The Federal Court
The Republic Court
Joining a trade union is voluntary. Strikes and work stoppages are neither legal nor illegal. The Confederation of Trade Unions of Yugoslavia is the official national trade union of Yugoslavia. Trade unions have the constitutional mandate of protecting the rights of workers and preserving the self-management system. There is no contradictory/adversary purpose of trade unions in Yugoslavia.
ELECTIONS
Every citizen from age 16 and up can vote in the upcoming Federal and Republic elections. This differs sharply from Yugoslavia's last election, where only citizens aged 18 and up who didn't give support in any way to the reactionary Mihajlovic forces during the war could vote. Any Yugoslav candidate can run for office, not just those from the National Liberation. All Republic Assemblies and the Federal Assembly are up for election. No party can run a candidate. All ballots are secret.
ooc: i reserve the right to change stuff because i can't be sure if i missed anything
"raw_score": 2.6021225452423096,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Politics |
By Beth Casteel
The ability to read and write is a fundamental skill used in almost every aspect of everyday life. At an early age, children begin to develop their literacy by reading and by having their parents read to them. When kids don’t have access to books at a young age, it can greatly impact their development.
For Judy Payne and Judi Kovach, this problem was something that they wanted to help solve. Since creating the Kids’ Book Bank in 2015, the two of them have been able to bring more than 900,000 books to kids in the Cleveland area.
The IEA Reading Literacy Study found that 61 percent of children from low-income families grow up without a single book in their homes. In early brain development, books can be a fantastic tool to help children grow, and not having access to books in the home can potentially make them fall behind.
“The kids’ book bank serves a need. We provide free books to kids in need,” Payne said. “The reality is, kids need to have books to be exposed to language, to fuel their imagination and to learn about the world beyond their neighborhood, so we want kids to have books.”
Prior to creating the Kids’ Book Bank, Payne and Kovach originally decided to set up free book kiosks in areas around Cleveland called Little Free Libraries. The libraries, started in 2009 in Hudson, Wisconsin, are meant to encourage people to take a book and then replace it. This tends to work great in suburban neighborhoods, but it’s a little bit of a different story in urban areas.
When Payne and Kovach first started doing these, they began to realize that kids would be so excited to get a book, but they wouldn’t have anything to replace the book that they were taking. While kids could still take the books, the kiosks weren’t getting restocked, and that’s when the two decided to build on these libraries and start a book bank where they would be better able to supply books.
To supply the books needed, they struck up a partnership with Discover Books, an online book seller whose largest processing location is in Toledo. With this partnership, they are able to bring thousands of books a month to Cleveland.
To get books into the hands of kids in the area, the Kids’ Book Bank relies heavily on its volunteers, who are the driving force behind the operation. Once trained, volunteers are responsible for sorting books by reading level and getting the books ready for the bank’s distribution partners.
Those books — which range from baby books all the way to titles for teens — are sent to a variety of locations that include schools, child-care centers and agencies that teach parents to read to their children. From those places, the supply of books in the area is constantly restocked.
With a constant need to keep up with the demand for books, the book bank does everything in its power to ensure that young readers always have something to read. To do so, it focuses on recruiting volunteers during summer, a time when kids can lose ground in learning.
While Payne and Kovach stress that volunteers are most needed in the summer, any time during the year is a big help for the bank. With shifts almost daily at the bank’s location in Midtown, Payne encourages potential volunteers to go to its website to sign up.
The shifts that volunteers can sign up for vary in how many people are there, but there are typically 20 spots every day, with two shifts on Saturdays. For large groups, like sororities or clubs, the bank arranges special times when those groups can come and help sort books.
“What’s fun is, as you sort through books, you’re going down memory lane,” Payne said. “It’s fun to bring friends and do a job together.”
To volunteer at the Kids’ Book Bank, go to http://www.kidsbookbank.org/
"raw_score": 2.5505073070526123,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
The Medieval wonder that could welcome Trump
What’s so special about Britain's Westminster Hall? Our book, Wood, gets behind the politics
It’s big, old and wooden, it occupies a unique place in world politics, and some people wonder how it got there in the first place. When President Trump comes to Britain he could have the honour of addressing the UK’s members of parliament beneath the hallowed beams of Westminster Hall, within the Houses of Parliament.
Many within British politics would prefer that the new US head of state not put in an appearance here, yet fewer understand the cultural and architectural significance of this eleventh-century hall.
“Westminster Hall dates from 1097,” writes William Hall in his new book, Wood. “Originally its roof was likely supported by three columns, but these were replaced in the reign of Richard II by the royal carpenter Hugh Herland, creating the largest clearspan [unsupported] medieval roof in England, some 73 x 20 m (240 x 66 ft.). The room has witnessed some extraordinary events: the trials of Guy Fawkes and William Wallace took place here; it survived a massive fire that destroyed much of the old Palace of Westminster in 1834; a Provisional IRA bomb exploded here in 1974; and Nelson Mandela was afforded the rare compliment of being invited to address both Houses here in 1996.”
Yet, perhaps the most interesting thing about the hall is not its famous visitors, but the incredible ingenuity that went into its construction.
“The most spectacular wooden structure from the European Middle Ages is the timber-framed roof of Westminster Hall in London, constructed between 1393 and 1397 (67). It spans an aisle of 21 m (68 ft.) without supporting posts,” explains the writer and broadcaster Richard Mabey in Wood’s introduction. “Though there are medieval roofs in Sicily with greater spans, they achieve this by the use of single giant conifers of a kind unavailable in England, and Westminster’s structure remains the most audacious and intelligent.
“The designer and master carpenter was Hugh Herland. He personally selected the oak trees in Surrey, probably in Alice Holt Forest, and then had them cut and jointed in a framing yard in Farnham. 690 tonnes (760 tons) of wood lay on the ground, each brace and mortise marked with Roman numerals, imagined and intuited by Herland into a gravity-defying canopy. The wood was then taken by cart and boat to London and assembled into what is basically a series of mutually supporting triangles of hammer beams, hammer posts and sweeping arches. How it works is still something of an enigma. It is one of those wooden marvels where the carpenters, having dismembered large numbers of trees, then reassembled them in what is essentially another super-tree, a formal arrangement of trunk and fractal branching. The writer William Bryant Logan has beautifully described how the structure allows gravity to flow along rafters, down walls and into the ground – the roof, Logan writes, ‘is a force made visible’.”
It remains to be seen whether those architectural forces will come into close contact with one of the newer sources of power within global politics. For more on wood within architecture, past and present, order a copy of Wood here. And if you're looking for examples of how artists, both celebrated and unknown, have resisted the powers that be in recent times, check out Liz McQuiston's scholarly but thoroughly readable and copiously illustrated Visual Impact: Creative Dissent in the 21st Century.
"raw_score": 2.862736940383911,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Art & Design |
Sexuality and Faces - How does our "Gaydar" work?
Most of us think we're pretty good at guessing when somebody's gay or straight, but what signals are we using to make our decision, and how often are we right?
Psychologists at Queen Mary University of London are, for the first time, trying to isolate the individual signals and patterns in somebody's face, in order to work out exactly what motivates us to make a snap decision about sexuality.
Using cutting edge computer imagery, researchers have found a way of transferring male facial expressions onto female faces and vice versa, which means they can work out exactly how our "gaydar" works.
Dr Qazi Rahman, assistant professor in Cognitive Biology, and PhD student William Jolly are hoping that their research will challenge stereotypes and prejudice by increasing awareness of how quickly, and often inaccurately, people classify each other.
The Me Generation
Professor Jean Twenge from San Diego State University in California has already coined the phrase "Generation Me", describing the growing number of people who take it for granted that the self comes first. And she's less than flattering about the downsides of this fundamental cultural shift.
She talks to Claudia Hammond about her latest research using data mined from the American Freshman Survey. This study captures students' attitudes right back to 1966, and shows how current students rate themselves and their abilities compared with the generation of 45 years ago. Unsurprisingly, she finds that the younger generation is more likely to view themselves as above average, even though these attitudes aren't borne out by the facts.
IQ Tests and Learning Disabilities
Psychologists are considering whether guidelines on how learning disabilities are assessed should be revised, following concerns that IQ test scores could be depriving people of a formal diagnosis, and therefore access to services.
Dr Simon Whitaker, consultant clinical psychologist and senior visiting research fellow at Huddersfield University, has completed research which raises questions about the reliability and consistency of IQ scores for people with learning difficulties.
Current rules mean people must score less than 70 on an IQ test, as well as fulfilling other criteria, but Dr Whitaker claims IQ tests aren't reliable enough and that those missing out on a diagnosis are also missing out on access to services.
Dr Theresa Joyce, consultant clinical psychologist and the person leading the British Psychological Society Review on how learning disabilities are diagnosed and assessed, tells Claudia Hammond that a range of scores is used before a diagnosis is reached.
Producer: Fiona Hill.
"raw_score": 3.013232469558716,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
St. Kilda, comprising the main island of Hirta together with Soay, Boreray, Dun and several notable stacks, lies 41 miles (66 km) west northwest of North Uist in the Outer Hebrides and 102 miles (165 km) west of the Scottish Mainland. This volcanic archipelago provides spectacular landscapes and includes some of the highest cliffs in Europe, which offer a refuge for colonies of endangered bird species. Forming a unique ecosystem, the islands support their own sub-species of mouse and wren, together with the world's largest gannetry, the largest British colony of fulmars and half of Britain's puffins.
The origin of the name is interesting - there is no Saint Kilda - rather it likely derives from a misinterpretation of the Old Norse word Skildar or Skaldir meaning 'shield' by 17th C. map-makers and became firmly established following the publication of the book A Late Voyage to St. Kilda whose author Martin Martin visited the islands in 1697. St. Kilda was not mapped in detail until 1927, the work of John Mathieson (1855 - 1945), a retired Ordnance Survey surveyor, who also recorded the placenames for the first time and its archaeology. He was assisted by Alexander Cockburn, who later returned to the islands to complete a map of its geology.
The rocks are distinct from the gneiss of the rest of the Outer Hebrides, representing one of the Tertiary volcanoes which marked the birth of the Atlantic Ocean and today observed as the igneous complexes of Ardnamurchan, Mull and Skye. Hirta features the frozen remains of the magma which once powered one of these volcanoes, now forming light-coloured granophyre, together with dark dolerite and gabbro. The other islands of the group are almost exclusively composed of the latter.
The islands have a great wealth of archaeology, including evidence of Bronze Age occupation and of Viking visits. They are thought to have been more or less continuously occupied for around 2000 years with habitation concentrated at Village Bay and Gleann Mor, although the small area previously cultivated has now reverted to grassland. For much of the last 800 years the islands were owned by the Macleods of Macleod, with two successive settlements being constructed at Am Baille in 1836 and 1865. The inhabitants harvested seabirds and grazed up to 2000 sheep.
Thomas Carlyle (1795 - 1881) clearly did not enjoy his journey out into the North Atlantic when he said that St. Kilda was "worth seeing but it is not worth going to see." The first tourist ship to visit the islands was the Glen Albyn in 1834, with the Dunara Castle calling in the summer from 1877. It was this ship which was to evacuate the island's residents in 1930. Modern cruise ships now pass regularly and occasionally land.
Following a series of external influences, including disease, malnutrition and emigration of many of the young men, the remaining population asked to leave in 1930. The following year the islands became the property of John Crichton-Stuart, the 5th Marquess of Bute (1907-56), who maintained them as a wildlife sanctuary and left them to the National Trust for Scotland in 1957. St. Kilda is now managed jointly with Scottish Natural Heritage. Part of Hirta was leased to the Ministry of Defence to build a radar station to monitor the Hebrides Missile Range to the east. The population today is restricted to transient staff associated with this base, a seasonal warden, scientific workers and summer visitors.
Declared a National Nature Reserve (1957), a biosphere reserve (1976), a Site of Special Scientific Interest (1981), the islands were made a UNESCO World Heritage Site in 1986 and designated a Special Protection Area in 1992. The World Heritage Site was enlarged in 2004 to encompass the surrounding marine environment and, the following year, extended to include the archipelago's unique cultural heritage, making St. Kilda one of only a few places in the world with dual World Heritage status.
"raw_score": 2.5809404850006104,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | History |
Summary: In 1954, the U.S. Supreme Court issued the landmark decision Brown v. Board of Education, which struck down racially segregated schools because, the court said, they were inherently unequal and they unjustly harmed poor and minority children. Last month, a California court cited Brown v. Board as it struck down multiple state laws, passed at the behest of teachers’ unions, which the court said unjustly protected incompetent teachers and unconscionably harmed children, especially the least fortunate.
In a landmark decision that sent shock waves through the educational establishment, Los Angeles Superior Court Judge Rolf Treu ruled last month that California’s teacher tenure laws unconstitutionally deprive students of their guarantee to an education and to equal rights. “The evidence is compelling,” Judge Treu wrote. “Indeed, it shocks the conscience.”
In Vergara v. California, nine students sued the State of California, claiming that ineffective teachers were disproportionately placed in schools with large numbers of “minority” and low-income students. Judge Treu agreed and quoted the U.S. Supreme Court’s 1954 Brown v. Board of Education decision that education “is a right which must be made available to all on equal terms.”
The Vergara decision came down less than one month after the 60th anniversary of the Brown decision, in which the U.S. Supreme Court struck down state and federal laws establishing separate public schools for students classified by the government as “white” and “black.” (In Brown, the Court consolidated cases from Kansas, Virginia, South Carolina, and Delaware, as well as the federal jurisdiction of Washington, D.C.) The Supreme Court found that the practice of segregation violated the provision in the U.S. Constitution that “No State shall make or enforce any law which shall . . . deny to any person within its jurisdiction the equal protection of the laws.”
The argument in the current case, Vergara, is that, by forcing schools to favor incompetent teachers with seniority over more capable junior teachers, the rules deprive students of the education that the state constitution guarantees them. Further, because these rules funnel bad teachers to districts with large numbers of poor and “minority” students, those students are denied the equal treatment of the law.
The Vergara lawsuit was backed by Students Matter, a nonprofit educational policy advocacy group funded by Silicon Valley entrepreneur David Welch. “The state has a responsibility of delivering an education for the betterment of the child,” said Welch. “The state needs to understand that [its] responsibility is to teach children, and teach all of them.” Welch’s organization recruited the nine students, from several school districts, to serve as the public face of the case.
Astonishingly, the teachers’ union response to the ruling was that it was actually an attack on children. “This decision today is an attack on teachers, which is a socially acceptable way to attack children,” said Alex Caputo-Pearl, the president-elect of the Los Angeles teachers union. Instead of providing for smaller classes or more counselors, the reformers “attack teacher and student rights.”
Welch answered that claim in an op-ed for the San Jose Mercury News in which he described the harm students suffer from bad teachers:
According to the testimony of Harvard economist Dr. Thomas Kane, a student assigned to the classroom of a grossly ineffective math teacher in Los Angeles loses almost an entire year of learning compared to a student assigned to a teacher of even average effectiveness. Students assigned to more than one grossly ineffective teacher are unlikely ever to catch up to their peers.
And far from wanting to attack all teachers, Welch in the same article pleaded with his fellow Californians to reward good teachers:
“Let’s offer teachers opportunities for promotions, such as to master teacher, teacher mentor, or department chair, where the skills of a truly excellent, creative educator can reach more children—as well as better pay with incentives for excellence and taking on extra responsibilities or difficult positions.”
No less a union friend than Rep. George Miller (D-Calif.), whose largest campaign support comes from unions, has bluntly admitted, “Vergara will help refocus our education system on the needs of students.” No wonder the teachers’ unions made five separate legal efforts to have the lawsuit dismissed on grounds other than the merits of the case.
California teacher union members number some 445,000. Both the California Teachers Association (CTA, an affiliate of the National Educational Association) and the California Federation of Teachers (CFT, an affiliate of the American Federation of Teachers) plan to appeal the court’s decision. Jim Finberg, a lawyer for the two teachers’ unions, said that Judge Treu’s decision “ignores overwhelming evidence the current laws are working.”
Actually, less than 0.002% of teachers in California are dismissed in any given year. Judge Treu noted that, when an effort is made to fire a teacher, “it could take anywhere from two to almost ten years and cost $50,000 to $450,000 or more to bring these cases to conclusion under the Dismissal Statute, and that given these facts, grossly ineffective teachers are being left in the classroom.”
Judge Treu concluded that “distilled to its basics,” the unions’ position requires them to defend the proposition that the state has a compelling interest in the de facto separation of students from competent teachers, and a like interest in the de facto retention of incompetent ones. The logic of this position is unfathomable and therefore constitutionally insupportable.
Seniority vs. merit
The Vergara decision overturned a LIFO (last-in/first-out) law requiring that teacher layoffs be based on seniority, rather than individual merit. California’s Permanent Employment Law required that a teacher be tenured after two years at a school (which, because of an early notice requirement, worked out in practice to 18 months or less). California is one of only five states in which tenure may be received after such a short period. As noted by the blog Voices of San Diego:
Regardless of what we call it, here’s how it looks in San Diego Unified. Once they’re hired, rookie teachers have to make it through a two-year probationary period, during which they can be dismissed for pretty much any reason.
But because the district has to tell teachers by mid-March whether they’ll be invited back for the next school year, the trial period is actually shorter than two years. In the past, the district hasn’t been particularly aggressive in the number of probationary teachers it sends away—only about 1 percent wasn’t given tenure.
“With such little time, you don’t even have enough information to actually consider whether they’re an effective teacher,” said Nancy Waymack, a managing director for the reform-advocacy group National Council on Teacher Quality.
Compared to other states, California has some of the strongest laws in place to protect teacher employment. The effect of this case may spur action throughout the nation. “Without a doubt, this could happen in other states,” said Terry Mazany, who served as interim CEO of Chicago’s public schools in 2010-2011. A lawyer for Students Matter said they are already hoping to “engage with policymakers in New York and nationally,” and donor David Welch said the group would consider suits in other states (New Jersey, Connecticut, Maryland, Minnesota, New Mexico, and Oregon were mentioned as possible sites).
The term “due process” refers to a legal or quasi-legal system that protects the rights of an individual, such as by requiring a trial before a person can be executed. Unions defend the complicated procedures for firing teachers by claiming they amount to “due process” that protects those teachers from arbitrary, unfair treatment. As the Pew publication Stateline reports, “The unions argue that the rules protecting teachers are needed for school districts to attract and retain good teachers and to ensure that employees are not fired for arbitrary or unfair reasons.”
But the judge ruled in Vergara that the process has become so cumbersome—that it’s become so difficult to get rid of bad teachers—that it deprives students of their rights. He ridiculed the process as “über due process,” and observed that California state laws already provide a great deal of protection for government and private-sector employees facing dismissal. “Why,” he pleaded, “the need for the current tortuous process” that is mandated only for teachers, a process so unjust, he added, that it was even decried by witnesses called by the teachers’ unions?
James Taranto of the Wall Street Journal noted an irony at the center of the ruling:
“The California Supreme Court had applied the same legal premises to hold unconstitutional funding disparities among districts and one district’s decision to end the school year six weeks early owing to a budgetary shortfall. Vergara doesn’t break new legal ground so much as apply precedent in a way that threatens the education establishment. It’s a case of judicial activism coming back to bite the left.”
A permanent job
As noted in Waiting for ‘Superman,’ a documentary promoting educational reform, one out of every 57 doctors loses his or her license to practice medicine, and one of every 97 lawyers loses his or her license to practice law. Yet, in many major cities, only one out of 1,000 teachers is fired for performance-related offenses. The reason is tenure, or as the unions call it, “permanent status.”
Tenure is the practice of guaranteeing a teacher his or her job. Originally, this was a due process guarantee, something intended to work as a check against administrators capriciously firing teachers and replacing them with friends or family members. It was also designed to protect teachers who took political stands the community might disagree with. Tenure as we understand it today was first seen at the university level, where, ideally, professors would work for years and publish many pieces of inspired academic work before being awarded what amounted to a job for life.
At the elementary and high school level, tenure has evolved from the original understanding of “due process” to the university-style “job for life.” In most states, teachers are awarded tenure after only a few years, after which time they become almost impossible to fire. The main function of these laws is to help bad teachers keep their jobs.
►One Los Angeles union representative has said: “If I’m representing them, it’s impossible to get them out. It’s impossible. Unless they commit a lewd act.” Unfortunately for the students who have to learn from these educators, virtually every teacher who works for the Los Angeles Unified School District receives tenure. In a study of its own, the Los Angeles Times reported that fewer than two percent of teachers are denied tenure during the probationary period after being hired. And once they have tenure, there’s no getting rid of them. Between 1995 and 2005, only 112 Los Angeles tenured teachers faced termination—eleven per year—out of 43,000. And that’s in a school district where the high school graduation rate in 2003 was a pathetic 51 percent.
►One New Jersey union representative was even blunter about what his union does to keep bad teachers in the classroom: “I’ve gone in and defended teachers who shouldn’t even be pumping gas.”
In 10 years, only about 47 out of 100,000 teachers were terminated from New Jersey’s schools. Original research conducted by the Center for Union Facts (CUF) has confirmed that almost no teacher is ever fired in Newark, which is New Jersey’s largest school district, no matter how bad a job the teacher does. Over one four-year period, CUF discovered, Newark’s school district successfully fired about one out of every 3,000 tenured teachers annually. This is a city where roughly two-thirds of students never graduate from high school.
►In New York City, the New York Daily News reported that “just 88 out of some 80,000 city schoolteachers have lost their jobs for poor performance” over 2007-2010.
Then there were the so-called “rubber rooms” of New York City, which operated until 2010. Teachers who couldn’t be relieved of duty would report to these “rubber rooms,” where they would be paid to do nothing for weeks, months, even years. According to the New York Daily News, at any given time an average of 700 teachers were being paid not to teach while the district jumped through the hoops, imposed by the union contract and the law, to pursue discipline or termination. (A city teacher in New York who ended up being fired spent an average of 19 months in the disciplinary process.) The Daily News reported that the New York City school district spent more than $65 million annually just to pay the teachers who were accused of wrongdoing. Millions more tax dollars were spent to hire substitutes.
After the embarrassing Daily News story and an exposé in the New Yorker, the union agreed to end the practice of rubber rooms but refused to expedite the dismissal process. Instead of whiling the days away doing nothing, the teachers were assigned to do clerical work and perform other semi-useful tasks.
The problem isn’t limited to teachers accused of wrongdoing. The city spends more than $100 million every year paying teachers who have been excessed (i.e., whose positions have been eliminated) but have yet to find jobs.
According to the Wall Street Journal, the ironclad union contract requires that any teacher with tenure be paid full salary and benefits if he or she is sent to the “Absent Teacher Reserve pool.” The average pay of a teacher in that pool is over $80,000 a year, and some teachers have stayed in the pool for years. The Journal reports that the majority of teachers in the pool had “neither applied for another job in the system nor attended any recruitment fairs in recent months.”
►Things are no better in New York as a whole. The Albany Times Union looked at what was going on statewide outside New York City and discovered some shocking data: Of 132,000 teachers, only 32 were fired for any reason between 2006 and 2011.
►In Chicago, a school system that has by any measure failed its students—only 28.5 percent of 11th graders met or exceeded expectations on that state’s standardized tests—Newsweek reported that only 0.1 percent of teachers were dismissed for performance-related reasons between 2005 and 2008. When barely one in four students nearing graduation can read and do math, how is it possible that only one in one thousand teachers is worthy of dismissal? It may well be that most of the city’s teachers are good teachers, but can 99.9% of them be good?
Effects of tenure and related teacher “protections”
Modeled after labor arrangements in factories, the typical teachers’ union contract is loaded with provisions that do not promote education. These provisions drive away good teachers, protect bad teachers, raise costs, and tie principals’ hands.
● The Dance of the Lemons
One of the more shocking scenes in the documentary Waiting for ‘Superman’ is an animated illustration of “The Dance of the Lemons.” This is no waltz or foxtrot. Rather, it’s the systematic shuffling of incompetent teachers from school to school. These teachers can’t be fired because union contracts require that “excessed” educators, no longer needed at their original school, must be given first crack at new job openings when slots open up elsewhere in the district. Administrators at other schools don’t want to hire these bad teachers, but districts are unable to fire them.
What happens? LA Weekly documented just how this process plays out in Los Angeles in a massive 2010 investigation. “The far larger problem in L.A. is one of ‘performance cases’—the teachers who cannot teach, yet cannot be fired. Their ranks are believed to be sizable—perhaps 1,000 teachers, responsible for 30,000 children. … The Weekly has found, in a five-month investigation, that principals and school district leaders have all but given up dismissing such teachers. In the past decade, LAUSD officials spent $3.5 million trying to fire just seven of the district’s 33,000 teachers for poor classroom performance—and only four were fired, during legal struggles that wore on, on average, for five years each. Two of the three others were paid large settlements, and one was reinstated. The average cost of each battle is $500,000.”
Unintended Consequences, a study by The New Teacher Project (TNTP), documented the damage done by this union-imposed staffing policy. In an extensive survey of five major metropolitan school districts, TNTP found that “40 percent of school-level vacancies, on average, were filled by voluntary transfers or excessed teachers over whom schools had either no choice at all or limited choice.” One principal decried the process as “not about the best-qualified [teacher] but rather satisfying union rules.”
● Thinning the talent pool
One problem related to the destructive transfer system is a hiring process that takes too long and/or starts too late, thanks in part to union contracts. Would-be teachers typically cannot be hired until senior teachers have had their pick of the vacancies, and the transfer process makes principals reluctant to post vacancies at all for fear of having a bad teacher fill it instead of a promising new hire.
In the study Missed Opportunities, The New Teacher Project found that these staffing hurdles help push urban districts’ hiring timelines later to the point that “anywhere from 31 percent to almost 60 percent of applicants withdrew from the hiring process, often to accept jobs with districts that made offers earlier.”
“Of those who withdrew,” the TNTP report continues, “the majority (50 percent to 70 percent) cited the late hiring timeline as a major reason they took other jobs.” It’s the better applicants who are driven away: “Applicants who withdrew from the hiring process had significantly higher undergraduate GPAs, were 40 percent more likely to have a degree in their teaching field, and were significantly more likely to have completed educational coursework” than the teachers who ended up staying around to finally receive job offers.
● Keeping experienced teachers away from poor children
Another common problem with the union contract is a “bumping” policy that fills schools which are more needy (but less desirable to teach in) with greater numbers of inexperienced teachers. In its report Teaching Inequality, the Education Trust noted: “Children in the highest-poverty schools are assigned to novice teachers almost twice as often as children in low-poverty schools. Similarly, students in high-minority schools are assigned to novice teachers at twice the rate as students in schools without many minority students.”
● Bad apples stay
A study conducted by Public Agenda polled 1,345 schoolteachers on a variety of education issues, including the role that tenure played in their schools. When asked “does tenure mean that a teacher has worked hard and proved themselves to be very good at what they do?” 58 percent of the teachers polled answered that no, tenure “does not necessarily” mean that. In a related question, 78 percent said a few (or more) teachers in their schools “fail to do a good job and are simply going through the motions.”
When Terry Moe, the author of Special Interest: Teachers Unions and America’s Public Schools, asked teachers what they thought of tenure, they admitted that the byzantine process of firing bad apples was too time-consuming: 55 percent of teachers, and 47 percent of union members, answered yes when asked “Do you think tenure and teacher organizations make it too difficult to weed out mediocre and incompetent teachers?”
● The union tax on firing bad teachers
So why don’t districts try to terminate more of their poor performers? The sad answer is that their chance of prevailing is vanishingly small. Teachers unions have ensured that even with a victory, the process is prohibitively expensive and time-consuming. In the 2006-2007 school year, for example, New York City fired only 10 of its 55,000 tenured teachers, or 0.018%. The cost to eliminate those employees averages out to $163,142, according to Education Week. The Albany Times Union reports that the average process for firing a teacher in New York state outside of New York City proper lasts 502 days and costs more than $216,000. In Illinois, Scott Reeder of the Small Newspaper Group found it costs an average of $219,504 in legal fees alone to move a termination case past all the union-supported hurdles. In Columbus, Ohio, the teachers’ union president admitted to the Associated Press that firing a tenured teacher can cost as much as $50,000. A spokesman for Idaho school administrators told local press that districts have been known to spend “$100,000 or $200,000” in litigation costs to toss out a bad teacher.
It’s difficult even to entice the unions to give up tenure for more money. In Washington, D.C., school chancellor Michelle Rhee proposed a voluntary two-tier track for teachers. On one tier, teachers could simply do nothing: Maintain their regularly scheduled raises and keep their tenure. On the other track, teachers could give up tenure and be paid according to how well they and their students performed, with the potential to earn as much as $140,000 per year. The union wouldn’t even let that proposal come up for a vote among its members, and stubbornly blocked efforts to ratify a new contract for more than three years. When the contract finally did come up for ratification by the rank and file, the two-tier plan wasn’t even an option.
● Taking money from good teachers to give to bad teachers
During the expansion of teacher collective bargaining in the mid-twentieth century, economists from Harvard and the Australian National University found, the average, inflation-adjusted salary for U.S. teachers rose modestly—while “the range of the [pay] scale narrowed sharply.” Measuring aptitude by the quality of the college a teacher attended, the researchers found that the advent of the collectively bargained union contract for teachers meant that on average, more talented teachers were receiving less, while less talented teachers were receiving more.
The earnings of teachers in the lowest aptitude group (those from the bottom-tier colleges) rose dramatically relative to the average wage, so that teachers who in 1963 earned 73 percent of the average salary for teachers could expect to earn exactly the average by 2000. Meanwhile, the ratio of the earnings of teachers in the highest-aptitude group to earnings of average teachers fell dramatically. In states where the highest-aptitude teachers began with an earnings ratio of 157 percent, they ended with a ratio of 98 percent.
Data from the National Center for Education Statistics, as reported by Education Week, add further evidence to the compressed-pay claim. The Center’s stats indicate that the average maximum teacher pay nationwide is only 1.85 times as high as the nationwide average salary for new teachers.
● Locking up education dollars
Much of the money commanded by teachers’ union contracts is not being used well, at least from the perspective of parents or reformers. Several provisions commonly found in union contracts that cost serious money have been shown to do little to improve education quality. A report from the nonprofit Education Sector found that nearly 19 percent of all public education spending in America goes towards things like seniority-based pay increases and outsized benefits—things that don’t go unappreciated by teachers, but don’t do much to improve the quality of teaching children receive. If these provisions were done away with, the report found, $77 billion in education money would be freed up for initiatives that could actually improve learning, like paying high-performing teachers more money.
● Putting kids at risk
Teachers unions push for contracts that effectively cripple school districts’ ability to monitor teachers for dangerous behavior. In one case, school administrators in Seattle received at least 30 warnings that a fifth grade teacher was a danger to his students. However, thanks to a union contract that forces schools to destroy most personnel records after each school year, he managed to evade punishment for nearly 20 years, until he was finally sent to prison in 2005 for having molested as many as 13 girls. As an attorney for one of the victims put it, according to the Seattle Times, “You could basically have a pedophile in your midst and not know it. How are you going to get rid of somebody if you don’t know what they did in the past?”
The Bottom Line
Too many schools are failing too many children. Americans should not remain complacent about how districts staff, assign, and compensate teachers. And too many teachers’ union contracts preserve archaic employment rules that have nothing to do with serving children.
Even Al Shanker, the legendary former president of the American Federation of Teachers, admitted, “a lot of people who have been hired as teachers are basically not competent.”
This is what the union wants: To keep teachers on the payroll regardless of whether or not they are doing any work or are needed by the school district. Why? As long as they are on the payroll, they keep paying union dues. The union doesn’t care about the children who will be hurt by this misallocation of tax dollars. All union leaders care about is protecting their members and, by extension, their own coffers.
Most teachers absolutely deserve to keep their jobs, and some have begun to speak out about the absurdity of teacher tenure, but it’s impossible to pretend that the number of firings actually reflects the number of bad teachers protected by tenure. As long as union leaders possess the legal ability to drag out termination proceedings for months or even years—during which time districts must continue paying teachers, and substitute teachers to replace them, and lawyers to arbitrate the proceedings—the situation for students will not improve.
The Vergara case offers hope, but supporters of better education cannot rely on judges to fix America’s schools. Parents and teachers must join together to eliminate teacher tenure systems that protect bad teachers and that divert our best teachers away from many of the students who could benefit most from their skills and experience.
* * *
About the Author: Richard Berman is executive director of the Center for Union Facts. Some of this material appeared previously on the website TeachersUnionExposed.com, a project of the Center for Union Facts. This article originally appeared on the website Labor Watch, and is republished here with permission. | <urn:uuid:76946d5a-8567-4fb4-8efb-e79a61d340ab> | CC-MAIN-2017-39 | http://unionwatch.org/californias-vergara-ruling-a-bad-day-for-bad-teachers/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689806.55/warc/CC-MAIN-20170923231842-20170924011842-00276.warc.gz | en | 0.971533 | 5,699 | 3.265625 | 3 | {
"raw_score": 3.0449960231781006,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
Awgen, Mar (fl. 4th – early 5th cent.)
Mar Awgen is the traditional founder of monasticism in Mesopotamia. The two primary sources for his life are an anonymous ‘Life of Mar Awgen’ and a brief account preserved in the ‘Book of Chastity’, composed by Ishoʿdnaḥ of Baṣra (9th cent.). He receives an earlier mention in Dadishoʿ Qaṭraya, in the 7th cent. Mar Awgen was an Egyptian, born in the time of Constantine. He worked for twenty-five years as a pearl diver, but then left this occupation to join the nascent monastic movement, and he became a disciple of Pachomius, the famous Egyptian monastic leader. After spending some time in the monastery of Pachomius, Mar Awgen travelled to Nisibis and founded a monastery on nearby Mt. Izla. His seventy disciples who followed him from Egypt were the reputed founders of other monasteries in various parts of Mesopotamia and other Syriac-speaking lands. Mar Awgen himself is said to have performed numerous miracles in the presence of Shapur, the King of Persia. It has also been reported that his remains and those of certain of his disciples were brought to the Monastery of Mar Ḥananya (Dayr al-Zaʿfarān). Mar Awgen is held in the highest esteem in all the eastern churches with Syriac roots.
Scholars have now shown that this traditional account does not have a firm historical foundation and has served primarily to obfuscate the native origins of Syro-Mesopotamian monasticism in favor of reputed Egyptian origins. These accounts of Mar Awgen have been shown to be of late origin, anachronistic, and extremely divergent on important details, e.g., the number of his disciples ranges from eighteen to seventy, and a number of monks named in these accounts actually lived as late as the 7th and even 10th cent. Attempts to connect Mar Awgen to the Aōnes mentioned in Sozomen’s Ecclesiastical History (VI.33) have failed, and it is even possible, as some scholars have surmised, that there was, in fact, no such historical person as Mar Awgen.
- P. Bedjan, Acta Martyrum et Sanctorum, vol. 3 (1890–97), 376–480.
- J.-B. Chabot, Livre de la Chasteté (1896).
- S. Chialà, Abramo di Kashkar e la sua comunità (2005), 13–20.
- J.-M. Fiey, ‘Aônês, Awun et Awgin’, AB 80 (1962), 52–81.
- J.-M. Fiey, Jalons (CSCO 310), 100–11.
- J.-M. Fiey, Saints syriaques (2004), 40–1.
- Labourt, Le christianisme dans l’empire perse, 300–15.
- A. Scher, Histoire nestorienne inédite (Chronique de Séert), Première partie (PO 4; 1907).
- N. Sims-Williams, ‘Eugenius (Mar Awgen)’, EIr, vol. 9 (1999), 64.
How to Cite This Entry
Footnote Style Citation with Date:
Edward G. Mathews, Jr., “Awgen, Mar,” in Gorgias Encyclopedic Dictionary of the Syriac Heritage: Electronic Edition, edited by Sebastian P. Brock, Aaron M. Butts, George A. Kiraz and Lucas Van Rompay (Gorgias Press, 2011; online ed. Beth Mardutho, 2018), https://gedsh.bethmardutho.org/Awgen-Mar.
Bibliography Entry Citation:
Mathews, Edward G., Jr. “Awgen, Mar.” In Gorgias Encyclopedic Dictionary of the Syriac Heritage: Electronic Edition. Edited by Sebastian P. Brock, Aaron M. Butts, George A. Kiraz and Lucas Van Rompay. Digital edition prepared by David Michelson, Ute Possekel, and Daniel L. Schwartz. Gorgias Press, 2011; online ed. Beth Mardutho, 2018. https://gedsh.bethmardutho.org/Awgen-Mar.
A TEI-XML record with complete metadata is available at https://gedsh.bethmardutho.org/Awgen-Mar/tei. | <urn:uuid:c0baa946-439a-4703-95b2-444977dc6de8> | CC-MAIN-2020-40 | https://gedsh.bethmardutho.org/Awgen-Mar | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00728.warc.gz | en | 0.897983 | 1,013 | 2.828125 | 3 | {
"raw_score": 2.9313199520111084,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Religion |
John D. Upton
October 15, 2001
The Matching Ronchi Test for telescope mirrors is very easy to set up and use. Once you understand the basic principles involved, you can use this test to help figure your mirror. Even beginners can quickly learn to visually interpret what the test is telling them about the shape of their mirror. This article is intended to give you the background you need to begin using the Matching Ronchi Test.
The Ronchi Test for telescope mirrors is perhaps one of the easiest optical tests to learn and use. Unlike the more common Foucault Test, the beginner will have very little trouble seeing something the very first time they try the Ronchi Test on their mirror. The Ronchi Test can be applied to an optical system at focus or to a mirror on the workbench at the center of curvature. This article covers the latter use.
The Ronchi Test was first described by Vasco Ronchi in 1923. In essence, it is much like the more commonly used Foucault test except that the knife edge is replaced by a grating consisting of fine, opaque, equally spaced lines ruled onto a transparent substrate. The shadows (actually silhouettes) of these fine lines of the grating appear projected onto the face of the mirror under test. The shape and position of these bands is examined and interpreted to give information about the shape of the mirror's surface.
Unlike the Foucault Test, the Ronchi Test is inherently qualitative. It will not allow you to directly assign a wavefront criterion to the errors you see on your mirror. In fact, simply looking at the curvature of the grating's bands projected onto the mirror alone is not sufficient to determine anything about the mirror's shape, much less whether the mirror is good or bad. The reason is that for any given band shape, there are many possible mirrors of differing surface profiles and parameters that will appear to match.
This does not mean the Ronchi Test is useless, however. Far from it, this qualitative test is very quick and can provide much useful information about your mirror at a glance. When used in the manner described in this article, the Ronchi Test can be used as a semi-quantitative test. As such, it can allow you to assign an approximate upper bound on the accuracy of your mirror.
The key to effectively using the Ronchi Test is the matching of a complete set of parameters to your mirror. The bands seen on your mirror are compared or matched with a set of patterns from a perfect mirror. The perfect mirror used for matching is virtual in nature. Its theoretical appearance is calculated and displayed on a computer screen or paper printout. Your mirror's appearance must match the virtual mirror when set up in a specific manner. The exact procedure will be described below, but first, let's look at the Ronchi Test in more detail.
Figure 1 at right shows a schematic representation of the Ronchi Test. A light source is placed near the center of curvature (COC) of the mirror. A grating having from 50 to 250 lines per inch is then also placed near the center of curvature in the returning beam. The light source may consist of either a pinhole or a narrow slit. If a slit is used, maximum contrast and visibility of the bands occur when the slit and grating lines are oriented precisely parallel to one another. A simplified version of the test setup is to place the Ronchi grating over both the source and the observation point. The grating itself is used to form one or more slits over the light source depending on size. This setup has the advantage of ensuring that the slit(s) are exactly parallel to the grating.
Now, the observer looks through the grating at the mirror. If the tester is near the center of curvature, one or more lines will appear projected onto the face of the mirror. The closer the tester is to the COC, the fewer lines will appear. The shape of the Ronchi bands seen on the mirror varies with the shape of the mirror. They may be straight, or they may be constricted either at the top and bottom or in the center.
See Figure 2 at right for an example. This is an actual Ronchigram image of an 8" F/6.7 mirror. The Ronchigram was captured with an inexpensive Web camera placed at the same position from which the user would view the mirror. Note the two distinct overlapping images. You can see the effect best at the left and right edges of the mirror. The dual images are a diffraction effect caused by the Ronchi grating itself. Depending on the grating and mirror parameters, more overlapping images may be seen. In most cases, these should not cause you many problems. The appearance can be simulated using the Diffract program listed near the bottom of this page. The only thing to keep in mind is that the multiple images will often give the impression of a turned edge on the mirror since the edge is shown several times offset laterally from itself. Carefully examine the image at the very edge and mentally "subtract out" the second image in order to properly evaluate the extreme edge of the mirror.
In using the Ronchi Test, it is neither useful nor sufficient to simply compare your mirror to a "standard" set of Ronchi patterns. There are no standard Ronchi pattern shapes. The appearance of the Ronchi grating's shadow bands depends strongly on the diameter, focal length, and surface profile of your mirror, the placement of the grating with respect to the mirror's center of curvature, and the number of lines per unit width of the grating. All of these factors are used in calculating the theoretical Ronchi patterns. You should take care to use accurate measurements for your mirror. Matching at a single point to a pattern on your mirror tells you little about the shape of the mirror. You must use accurate Ronchi simulation parameters and carefully perform the complete test as outlined below in order to adequately judge your mirror's figure.
In order to properly run the Matching Ronchi Test, you must first make some preliminary preparations. The first thing to do is to generate a set of Ronchi band patterns specific to your mirror. At the bottom of this article, you will find a list of Ronchi simulation programs you may use. (If you are somewhat ambitious and would like to write your own Ronchi simulator, you may refer to my article on how to perform Ronchi simulation.)
Using the simulator you have chosen, generate a series of images for different grating positions. Each grating position corresponds to a different distance (called an offset) of the grating from the radius of curvature of your mirror. (By convention, the Radius of Curvature of a non-spherical optical component is defined as the radius of curvature of a tiny central area nearest the optical axis. This is also called the paraxial radius of curvature.) You should use at least three grating offsets for your comparisons. Twice that is better. Several of the Ronchi simulation tools listed later can generate a mosaic of six images at different offsets. You should choose two or three offsets inside the radius of curvature (nearer the mirror) and three or four outside the radius of curvature (away from the mirror). I like to choose the innermost offset such that four or five Ronchi bands appear on the mirror. I prefer the outermost offset to show seven or eight Ronchi bands. (I find that too many bands just confuses the comparison.) The remaining offsets are chosen between these limits. Enter your mirror's specifications into the simulation program. Allow the simulator to generate the images at the offsets you have chosen. Finally, print out the images or set up the tester near the computer (or the computer near the tester).
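If you would rather script the simulation than use one of the packaged tools, the short Python sketch below shows the basic geometric idea. It is only a rough stand-in for the programs listed near the end of this article: it uses a simple moving-source zonal approximation, assumes a 250 lines-per-inch grating with a 50 percent duty cycle (the article's figures do not state which density was used), and ignores the diffraction effects visible in Figure 2, so treat its output as a qualitative guide rather than a reference pattern.

```python
# A minimal geometric Ronchigram sketch (an illustration, not the simulators
# named in this article).  Assumptions: source moves with the grating, 50%
# duty-cycle grating, no diffraction, small-angle geometry.
import numpy as np
import matplotlib.pyplot as plt

def ronchigram(D, R, K, lpi, offset, n=400):
    """Bright/dark band map seen on the mirror face.

    D      -- mirror diameter (inches)
    R      -- paraxial radius of curvature (inches), i.e. twice the focal length
    K      -- conic constant (0 = sphere, -1 = paraboloid)
    lpi    -- grating lines per inch
    offset -- grating position relative to the paraxial center of curvature
              (negative = inside, positive = outside)
    """
    pitch = 1.0 / lpi
    y, x = np.mgrid[-D / 2:D / 2:n * 1j, -D / 2:D / 2:n * 1j]
    r2 = x**2 + y**2
    on_mirror = r2 <= (D / 2) ** 2
    # Longitudinal aberration of each zone (moving-source approximation).
    dL = -K * r2 / (2.0 * R)
    # Where the ray from mirror point (x, y) pierces the grating plane.
    xg = x * (dL - offset) / (R + dL)
    # Bright wherever that point falls in a clear gap (gap centered on axis).
    bright = np.cos(2.0 * np.pi * xg / pitch) > 0.0
    return np.where(on_mirror, bright, np.nan)

if __name__ == "__main__":
    # The article's example mirror: 8" F/7, so R = 2 * 8 * 7 = 112 inches.
    offsets = [-0.192, -0.055, 0.082, 0.219, 0.356, 0.493]
    fig, axes = plt.subplots(2, 3, figsize=(9, 6))
    for ax, off in zip(axes.flat, offsets):
        ax.imshow(ronchigram(8.0, 112.0, -1.0, 250, off), cmap="gray")
        ax.set_title(f'offset {off:+.3f}"')
        ax.axis("off")
    plt.tight_layout()
    plt.show()
```

With a paraboloid (K = -1) these offsets reproduce the qualitative behavior described later in this article: bands bowing inward inside the radius of curvature and outward beyond it.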
Now, it is time to set up the Ronchi tester. Begin by making sure that the grating is in place and you are using an appropriate light source. A small pinhole probably works best but is not necessarily the easiest nor brightest source. A slit of approximately the same width as one of the gaps (clear areas) in the Ronchi grating will deliver more light than a pinhole. If using a slit for the light source, take care to ensure that it is very nearly parallel to the grating's lines. If the slit is not parallel to the grating, the resulting Ronchi bands on the mirror will be of low contrast and harder to see. It often easiest to just place a relatively large grating over the source allowing it to extend far enough away from the light source that you can also view the mirror through it.
With the tester ready to use, we now set up the tester and mirror. Place the mirror on your test stand. Place the tester on a stable stand at a distance from the mirror approximately equal to twice the focal length. (Twice the focal length of the mirror is the Radius of Curvature.) Adjust the longitudinal position of your tester's moving stage to about one third of the way from its innermost limit of travel. Now use a tape measure and carefully move the whole tester back and forth until the distance between the light source and the mirror's surface is equal to twice your focal length.
The final step is to align the tester and mirror stand so that the image of the light source may be viewed through the grating. Once the tester and mirror are aligned, you should be able to see one or two Ronchi bands on the surface of your mirror. You may want to nudge the tester slightly to one side or another to center a line or gap at the center of the mirror. The only reason here is to better match the simulated images you have generated to make your comparisons.
The first step in running the Matching Ronchi Test is to locate the tester at a known offset with respect to the central zone's center of curvature. I usually use one of the two following methods. They basically work the same and there is no reason I choose one over the other for a session other than just to be different as the mood fits. For both methods, it is best that you have a smooth figure of revolution that looks at least something like a conic pattern with the Foucault Test. Otherwise, a careful measured test may give misleading information.
The two methods of finding a starting point for running the Ronchi test can be summarized as follows.
Method 1 Summary
Look through the Ronchi grating and adjust your tester so that the pattern on your mirror matches as closely as possible the computer simulated image for the center of curvature (grating offset of 0.000). Match as closely as you can based on the number of bands visible and their placement rather than their curvature. (As you near completion of the mirror, you should match on curvature in addition to the number and zonal crossing position of the bands.) Use this matching point as the assumed radius of curvature and adjust your tester to zero or write down the current reading so you can add it to all other simulated offsets as you run the test.
For Example, if your tester read 0.236" when the 0.000" (radius of curvature) image matched as best as possible, then move the tester to 0.336" for the 0.100" image, to 0.736" for the 0.500" image, etc. For the inside radius of curvature images, you would set the tester to 0.036" (-0.200 + 0.236) for the -0.200" reading, etc.
Method 2 Summary
Set up the tester and adjust its position to match one of the simulated zonal images you are using. Again, initially match the number of displayed bands (and their zonal crossing positions as you get close to the target paraboloid) rather than their curvature. Write down the tester reading at the point of best match. Subtract (remembering that inside ROC image offsets are negative) the offset of the simulated image you are matching. Write the result down. You will add this number to all your image offsets to get the required tester position.
For Example, if the innermost image you initially matched was at a simulated offset of -0.300" and your tester reading was 0.189", then first write down 0.189". Now subtract -0.300" and write down (0.189 - -0.300 =) 0.489". This number is then added to each of the offset numbers to tell you where to place the tester. In this example, for the -0.100" image you would move the tester to a reading of 0.389", for the 0.300" image, the tester goes to 0.789" and for the 0.500" image, the tester needs to be moved to 0.989".
Running the test as described above is actually simple and keeping track of the numbers isn't too bad -- you just have to stay focused on procedures. It becomes downright simple if you have a way to reset your tester reading mechanism to a known value after matching the initial image. Then all readings can be made directly from the tester's micrometer or dial. As an alternative, if you have easy access to a computer nearby, you can use a spreadsheet program to do the offset calculations for you. I have such a spreadsheet which you may download. Refer to the links at the bottom of this article.
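If you prefer code to a spreadsheet, the arithmetic behind both methods is only a subtraction and an addition. The little Python function below is a hypothetical stand-in for that spreadsheet (it is not the downloadable one), shown here reproducing the Method 2 example from the text.

```python
# Hypothetical stand-in for the offset spreadsheet described above.
def tester_positions(matched_reading, matched_offset, simulated_offsets):
    """Convert simulated grating offsets into tester dial readings.

    matched_reading   -- dial reading at the image you matched
    matched_offset    -- simulated offset of that image (0.0 for Method 1)
    simulated_offsets -- offsets of every image in your comparison set
    """
    base = matched_reading - matched_offset
    return {off: base + off for off in simulated_offsets}

# Method 2 example from the text: the -0.300" image matched at a reading of 0.189".
positions = tester_positions(0.189, -0.300, [-0.300, -0.100, 0.300, 0.500])
for off, reading in sorted(positions.items()):
    print(f'image offset {off:+.3f}"  ->  set tester to {reading:.3f}"')
```

Running it prints 0.389" for the -0.100" image, 0.789" for the 0.300" image, and 0.989" for the 0.500" image, matching the worked example above.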
As you near completion of figuring, it helps to have images of slightly over and under corrected mirrors available so that you can see just how close you are getting. As you look at the mirror, you can compare it to the perfect mirror image as well as mirrors of say 1/8 wave over and under corrected. Using the Matching Ronchi Test in this way will allow you to get quite close to your target.
Interpretation of Ronchi Test grating band patterns is governed by a single rule. The general rule of Ronchi testing is this:
"The bands in the Ronchi pattern for any zone become closer together the farther you move from that zone's Center of Curvature. Conversely, the bands in the Ronchi pattern become farther apart the closer you move toward that zone's Center of Curvature."
The reasoning for this rule is very simple. Refer to Figure 3a at the right. Let's assume we have a spherical mirror with the light source at the center of curvature. After striking the mirror and returning, the cone of the converging ray bundle gets smaller as it approaches its focal point at the center of curvature. When we place a grating in this converging cone, the smaller the cone is at the point of insertion, the fewer of the grating's lines cut into the light cone. It is the silhouettes of these lines that we see on the face of the mirror during the Ronchi Test.
As you move the grating away from the center of curvature in either direction, the cone is larger and the number of bands intersecting the returning beam is increased. Thus it is easy to see why we will always have an increasing number of bands showing on the mirror as we move the grating away from the center of curvature. It is also easy to see that whenever more bands are showing on the face of the mirror, they must appear closer together.
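For a spherical mirror this rule is easy to put numbers on: the width of the returning cone at a distance from focus equal to the grating offset is roughly the mirror diameter times the offset divided by the radius of curvature, and the number of bands you see is roughly that width multiplied by the grating's line density. The snippet below is a ballpark sketch of that similar-triangles estimate; the 100 and 250 lines-per-inch values are only examples, since the article does not state the density used for its figures.

```python
# Rough band-count estimate for a spherical mirror, from similar triangles.
# This ignores aberration and diffraction, so treat it as a ballpark figure.
def bands_visible(D, R, lpi, offset):
    cone_width = D * abs(offset) / R   # width of the returning cone at the grating
    return cone_width * lpi            # grating lines spanned by that cone

# Example: 8" mirror with R = 112" (the 8" F/7 used later in this article).
for lpi in (100, 250):
    for off in (0.1, 0.25, 0.5):
        n = bands_visible(8, 112, lpi, off)
        print(f'{lpi} lpi, {off:.2f}" from focus: about {n:.1f} bands')
```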
Now, let's extend this concept to a non spherical mirror. As you already know, when tested at the center of curvature, a non-spherical mirror shows spherical aberration -- each annular radial area (or zone) on the mirror's surface focuses its light at a different point along the optical axis. This occurs because the radius of curvature of each radial zone is different. Refer to Figure 3b at the far right.
Here we see the area near the center of curvature of a paraboloidal mirror. The light rays from zones nearer the center of the mirror cross the optical axis closer to the mirror than those from zones nearer the edge. (The edge has a longer radius of curvature than the center.) Let's look at two grating positions in this area of the converging light cone. The gratings are depicted from the top cross sectional view. We see only the ends of the grating's lines depicted as dots. For a grating placed just inside the convergence region of the depicted central zones, we see that few lines will appear near the center of the mirror, yet the light cone cuts across ten lines of the grating since we are in a relatively large section of the cone. We would then expect to see ten lines across the mirror with the innermost lines relatively far apart and the outermost lines more crowded together. The bands would bow inward as we look at zones starting in the center of the mirror and then scan up to the top edge of the mirror.
Now, let's examine the second grating position just inside of the convergence region of the depicted outer zones. Here we see that the grating has cut into a smaller portion of the aberrated light cone and will show only four bands. The central rays are diverging here and will show all four bands while the edge zones will show only a couple of bands. This means that we will see more bands at the center of the mirror and fewer as we scan up from the center to the top edge. The bands are bowing outward at this grating position.
These two examples should show why the curvature of the Ronchi lines occurs. In essence, Ronchi testing consists of examining the mirror one zone at a time from center to top edge, applying the Ronchi Testing rule. Look at the number of bands showing on the face of the mirror for each zone from center to top edge. Also look at the points at which the bands cross the horizontal center line of the mirror. Finally, you should pay attention to the rate of curvature of the bands as you scan them from center to top edge. All of these characteristics play a role in determining the shape of the curve on your mirror.
To summarize, if the Ronchi bands get closer together, in other words they "bend" towards the vertical center line of the mirror as they approach the top edge, then you are farther from the edge's center of curvature than the center's center of curvature. If on the other hand, they get farther apart or "bend" outward as they approach the top edge, then you are closer to the edge's center of curvature than the center's center of curvature.
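To put rough numbers on this for the 8" F/7 paraboloid used in the examples below: one common approximation (for a source that moves with the grating) places each zone's axial crossing at about (-K) x r^2 / (2R) beyond the paraxial center of curvature. Treat that convention as an assumption and defer to your own simulator if it uses a different one, but the scale is instructive: the whole spread is well under a tenth of an inch.

```python
# Zonal axial crossings for the article's 8" F/7 paraboloid, relative to the
# paraxial center of curvature.  Moving-source approximation (an assumption);
# use your own simulator's convention if it differs.
R, K = 112.0, -1.0
for r in (0.0, 1.0, 2.0, 3.0, 4.0):
    dL = -K * r**2 / (2.0 * R)
    print(f'zone at {r:.0f}":  crosses the axis {dL:+.4f}" beyond the paraxial COC')
```

Under this approximation even the edge zone crosses only about 0.07" beyond the paraxial center of curvature, which suggests why the positive offsets used in the example figures (0.082" and beyond) already sit outside every zone's crossing and show outward-bowing bands.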
Let's next apply this principle to see how it works. Refer to Figure 4 at left. What the Ronchi interpretation rule means is that, when using the test, you should look at the very center of the mirror under test and note the separation of two bands that are equidistant from the vertical centerline of the mirror. Now, scan your eye upward towards the 12 o'clock edge position and note whether the grating's bands get closer together or farther apart as they approach the top edge of the mirror.
In our example here, we see that the lines get farther apart as we approach the top of the mirror. Applying our general rule above, this means that the we are closer to the center of curvature of the edge zones of this mirror than we are to its central zones' center of curvature. Note that until we are told where the grating has been placed, we cannot yet say anything about this mirror's actual shape. We can only say with confidence that the grating has been placed closer to the center of curvature of the edge zones than that of the central zones.
If we are now told that the grating has been placed inside the center of curvature of the mirror, we can now begin to guess what shape it has. Since we know the grating is closer to the edge zones and the grating is inside the center of curvature, we infer that the mirror is an oblate ellipsoid of some sort. An oblate ellipsoid is the shape of a mirror whose edge focuses at a shorter distance than its center. (It is an under-corrected sphere.) Also note that we cannot yet say anything about how under-corrected this mirror is. We need to perform the full test in order to deduce that information.
Let's look at some other more specific examples of this phenomenon. In the examples that follow, we will look at a series of simulated Ronchi patterns for grating placements of -0.192", -0.055", 0.082", 0.219", 0.356", and 0.493" from the center of curvature of the central zone. Negative grating offsets indicate placement inside the radius of curvature while positive values indicate placement beyond the radius of curvature. The simulated mirror is a rather ordinary 8" F/7.
Example 1: A perfect sphere (Figure 5 on the left): (All zones have the same radius of curvature.)
At every point you place the grating (whether inside or outside the radius of curvature), it is always equidistant from every zone's center of curvature, so the bands don't bend at all as you visually scan up the mirror from center to the top edge. All the Ronchi bands are straight.
Example 2: A perfect parabola (Figure 6 right): (The edge has a longer radius of curvature than the center.)
Well inside of the radius of curvature, the grating is closer to the center's center of curvature than the edge's. In this case, the bands bend towards the vertical center line of the mirror as they approach the top edge. Well outside the radius of curvature, the grating is closer to the edge's center of curvature than the center's, so the bands bend away from the vertical center line of the mirror.
Example 3: An oblate ellipsoid (gross under-correction)(Figure 7 far left): (The radius of curvature of the edge is shorter than the radius of curvature of the center.)
Well inside of the radius of curvature, the grating is closer to the edge's center of curvature than the center's. In this case, the bands bend away from the center line of the mirror as they approach the upper edge. Well outside the radius of curvature, the grating is closer to the center's center of curvature than the edge's, so the bands bend towards the center line of the mirror as they approach the top edge.
Example 4: A perfect sphere with turned down edge (TDE)(Figure 8 left): (The radius of curvature of the edge is much longer than the radius of curvature of the rest of the mirror.)
Most of the surface has straight bands regardless of where you put the grating. Inside of the radius of curvature, the grating is very far from the edge's center of curvature, so the bands appear straight until they reach the bad zone and then they hook inward sharply. Outside the radius of curvature, the grating will be closer to the edge's center of curvature than the center's, so the bands will be straight over most of the mirror and then hook sharply outward as they reach the edge.
Example 5: A paraboloid with turned down edge (TDE) (Figure 9 far left): (The radius of curvature of the edge is much longer than is normal for a paraboloidal mirror.)
The Ronchi bands are curved much the same as a normal paraboloidal mirror but their curvature increases near the edge zones of the mirror. Compare this mirror to that depicted in Figure 6 above. Inside of the radius of curvature, the grating is very far from the edge's center of curvature, so the bands gently curve inward until they reach the bad zone and then they hook inward sharply. Outside the radius of curvature, the grating will be closer to the edge's center of curvature than the center's, so the bands will gently curve outward over most of the mirror and then hook sharply outward as they reach the bad edge zone. Note that the effects of the bad edge are most apparent outside the radius of curvature.
Example 6: A hyperboloid -- an over-corrected paraboloid (Figure 10 left): (The radius of curvature of the edge is longer than for a paraboloidal mirror.)
The Ronchi bands are curved more than a normal paraboloidal mirror. Their curvature increases rapidly near the edge zones of the mirror. Inside of the radius of curvature, the grating is far from the edge's center of curvature, so the bands curve inward more than for a paraboloidal mirror. Outside the radius of curvature, the grating will be closer to the edge's center of curvature than the center's, so the bands will curve outward to a greater degree than the normal paraboloidal mirror.
Note that the points at which the Ronchi bands cross the horizontal center axis of the mirror are also misplaced. Compare this mirror to that in Figure 6 above. It can be seen that the grating position would have to be moved farther back in order to make the number of bands seen more resemble that of the paraboloidal mirror in the bottom row of images in Figure 6. Even then, their curvature would be seen to differ. This is characteristic of over and under correction on the mirror. In general, when working outside the center of curvature, if you reach the matching number of bands too soon while moving the grating outward, the mirror is under-corrected. If you reach the matching number of bands too late, i.e. too far away, the mirror is over-corrected. Inside the center of curvature, these conditions are reversed. If you reach the approximate matching position too soon while moving the grating inward, your mirror is over-corrected, while reaching an approximate match too late indicates under-correction.
In addition to helping you figure your mirror, the Ronchi Test is excellent at showing the presence of zones on the mirror. If high or low zones are present on the mirror, the normally smoothly flowing nature of the Ronchi bands is disrupted. There will be irregularities in the shape of the bands when zonal problems are present. You can use the Ronchi interpretation rules as explained above to diagnose any zonal defects on the mirror. For instance, outside the radius of curvature, an area that causes a local outward bowing of the bands is caused by a longer focussing (high) zone.
The Ronchi Test is also a useful tool for examining the surface smoothness of your mirror. The best way to look for indications of a rough surface is to carefully examine the edges of the Ronchi bands. "Dog biscuit," or large-scale roughness, is very apparent as irregular edges and randomly varying widths on the bands. You will have to "look through" any air turbulence while testing to see the irregularities in the bands' edges, but careful examination can be quite revealing. While the Foucault test is generally more sensitive to roughness, the Ronchi test can still be used as a very effective initial assessment.
Finally, the Ronchi Test can also be used to give you a very good idea of the degree of correction of your mirror. You can use the test to assign an upper bound to the accuracy of your mirror's figure. All that is required is to compare your mirror to not only a perfect virtual mirror but also to a mirror with a known amount of wavefront error. To do this, you will need to find the conic constant for a mirror similar to yours that deviates from perfect by the amount of wavefront error you specify.
Several of the simulation programs listed below will calculate the appropriate conic constant for you. My program, Ronchi for Windows, will also draw the over and under corrected images for you. For each offset you specify, three images are drawn -- one under-corrected, one perfect, and one over-corrected. This allows direct comparisons to your mirror. Several of the other programs listed will report the conic constant which corresponds to a given wavefront error. You may then use that conic constant in place of your mirror's normal conic to simulate the Ronchi images at each of your offsets. You should generate a set of under-corrected and over-corrected image mosaics for use in your comparisons in addition to the normal set which represent a perfect mirror.
For each offset, compare your mirror to each of the simulated images. Try to note which of the images for a given offset more closely matches your mirror. You can plan your next figuring session based on whether the mirror appears to be mostly under-corrected or over-corrected as a whole. As you get closer to being finished, tighten the wavefront criteria you use to generate the comparison images. Print out this new image set. These resulting new images will be much more similar than before. When you have reached a point that you can no longer easily distinguish between the three images at any offset and your mirror differs little from the images, you may wish to begin using a higher line density (more lines per unit width) grating. The higher line density grating will provide more sensitivity allowing you to more easily discern the subtle differences for the smaller wavefront error allowance. When you have reached a wavefront criterion that you consider "good enough", it is time to perform a final verification test on your mirror and send it off for aluminizing.
When you are satisfied that your mirror is as good as you wish when tested with the Matching Ronchi Test, you should test it using some other method before declaring it complete. You may use any other test you are familiar with. Since you have a Ronchi Tester already, because of its similarity to a Foucault tester, you can perform that test by replacing the grating with a knife edge. Your newly gained experience with using the Ronchi test may also translate to a shorter learning period with the Foucault test. While using the Foucault test, be sure to examine the mirror's surface for roughness and "dog biscuit". Hopefully, if everything has gone well, your Foucault test results will agree with those you obtained from the Ronchi test. If the results are significantly different, go back and evaluate your procedures for both tests. Resolve the discrepancies and then rerun the tests. If the results do agree, consider this mirror complete.
Instead of using the Foucault test for your final verification, you may wish to use the star test. (It is entirely proper and helpful to use the star test as a final verification even if you have already run the Foucault test with satisfactory results.) The star test is extremely sensitive and powerful. If you are not familiar with star testing, please refer to Harold Suiter's excellent book "Star Testing Astronomical Telescopes" for details. After running the Ronchi test, the Foucault test, and the star test, you should have a very clear picture of the condition of your mirror. If your mirror has passed these tests and shows the classic smooth flowing lines in the Ronchi test, views through the eyepiece should be extremely rewarding. Send the mirror off, congratulate yourself, and prepare for years of enjoyment with the mirror you have just completed.
In addition to this article, there are other sources of information and tools on the World Wide Web to help you get started using the Matching Ronchi Test. I suggest that you check out the following information links. In addition to the Web Pages, I have also included links to software tools that can simulate the proper Ronchi Test patterns for you to use in your testing. The lists below are by no means exhaustive, they just represent the resources of which I am aware.
Ronchi Testing Web Pages:
Ronchi Testing Software Tools:
Making Ronchi Gratings At Home:
Hopefully, this introduction to the Matching Ronchi Test will be sufficient to get you started. While this description may have at times seemed very complex, actually performing the test is quite easy. Try it out. Once you have run through the procedure and seen the Ronchi patterns for yourself, the mystery will be quickly dispelled. Many amateurs have come to use only the Matching Ronchi Test combined with the star test to completely figure their mirrors. This is a very powerful combination. The only possible drawback is that the test will not provide you with a specific "bragging rights" wavefront error you can quote. Don't worry. Rather than quoting an accuracy number to your observing friends, show them how well your mirror performs under the night sky and just quietly smile.
─ ─ ─ ─ ─ ─ ─ | <urn:uuid:f57c2139-c4b4-430e-a726-ffe9fbfcfdd1> | CC-MAIN-2023-23 | http://atm-workshop.com/ronchi-test.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652116.60/warc/CC-MAIN-20230605121635-20230605151635-00382.warc.gz | en | 0.928454 | 6,762 | 3.734375 | 4 | {
"raw_score": 2.6886351108551025,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Ridding the Arctic of the World’s Dirtiest Fuel
We live in remarkable, stressful times. The last five years have been recorded as the five hottest on record, worldwide. Reports are coming in that an increase of 1.5 degrees Celsius may come even sooner than we thought. Worrying news is reaching us from the Arctic and Antarctic ice caps, and from the Himalayas, about increased melting of glacier ice. Urban pollution from transport and trade is threatening our children’s health. In the shipping world, truth is emerging not only about the climate impacts of global trade, but also about the harm to the health of cruise ship passengers and crews from fossil fuel pollution, and about the dodges used by the cruise industry to get away with dirty practices. It seems impossible to “get away from it all”, when the air on board your ship is filthier than the air in the world’s most polluted cities.
Yet in response, some good things are happening. We’re witnessing unequivocal calls to action from younger generations, led by Greta Thunberg, a Swedish schoolgirl who is not only inspiring thousands of children to stage climate strikes in many countries, but who is re-igniting a fire in those who have been striving for positive change on environmental issues. While national governments dither, cities and companies are developing ways to confront the biggest challenge humanity has ever faced.
The organisation I work for, the Clean Arctic Alliance, which campaigns for a ban on the world’s dirtiest fossil fuel from the Arctic, is noticing leaps of faith and innovation from some leading players in the global shipping industry. This is notable for an industry that traditionally tends to move at a near glacial pace, and happy to stay quiet during international climate negotiations.
But like the glaciers, the shipping sector is changing, and changing fast. It has gone from the tentative floating of zero emission concepts, to full speed ahead, with some commentators referring to a “gold rush” mentality in the quest for cleaner shipping fuels. Perhaps shipping industry leaders are finally taking inspiration from Lampedusa’s The Leopard: “if we want things to stay as they are, things will have to change” – and could end up demanding more progressive regulations from policymakers.
Last week, it was reported that Sweden’s shipping sector is preparing to end the use of fossil fuels domestically by 2045. This follows recent claims from shipping giant Maersk that it is aiming for a zero CO2 emissions target by 2050 – followed by engine manufacturer MAN Energy Solutions saying this is “technically possible”, but citing 2030 as a target. Elsewhere, Norway’s Arctic cruise operator Hurtigruten is building hybrid vessels, and the country is experimenting with hydrogen-powered ferries. Stena Line, one of the largest ferry operators, has been experimenting with methanol and battery-powered propulsion. Iceland is due to get its first battery-powered car ferry. Back in October 2018, The Economist reported that “Wind-powered ships are making a comeback”.
This week, shipping specialists from around the world will shutter themselves in the International Maritime Organization’s central London headquarters to thrash out a number of issues surrounding the threat of pollution to the climate and oceans from the global shipping industry – an industry that for most of us remains unseen, but which we depend on for bringing us stuff from all over the planet. At this meeting, the elegantly titled “PPR6”, delegations will be tasked with designing a ban on the use and carriage of heavy fuel oil as fuel, from Arctic waters, and the identification of measures which will reduce emissions of black carbon from the burning of fossil fuels.
Heavy fuel oil – known as HFO – is a dirty and polluting fossil fuel that powers ships throughout our seas and oceans – accounting for 80% of marine fuel used worldwide. Around 75% of marine fuel currently carried in the Arctic is HFO; over half by vessels flagged to non-Arctic states – countries that have little if any connection to the Arctic.
As the Arctic warms, the sea ice melts and opens up new shipping channels, larger cargo vessels, such as container ships carrying consumer goods – fuelled by HFO – could divert to Arctic waters in search of shorter journey times, as an alternative to the Suez Canal and the Straits of Malacca. In September 2018, the first commercial container ship – owned by Maersk – crossed the Northern Sea Route, from Vladivostok to Bremerhaven.
While this crossing was something of an experiment – and Maersk maintained that it did not use HFO, a full commercial rollout of container ships crossing the Arctic would greatly increase the volumes of black carbon emitted in Arctic waters, as well as the risks of HFO spills.
Already banned in Antarctic waters and in the waters around Svalbard, HFO is also a greater source of harmful emissions of air pollutants, such as sulphur oxide, and particulate matter, including black carbon, than alternative fuels such as distillate fuel and liquefied natural gas (LNG). When emitted and deposited on Arctic snow or ice, the climate warming effect of black carbon is up to five times more than when emitted at lower latitudes, such as in the tropics – just this week there’s been a spill of HFO in the Solomon Islands. In addition, if HFO is spilled in cold polar waters, it emulsifies, proving almost impossible to clean up, and breaks down very slowly. A HFO spill would have long-term devastating effects on Arctic indigenous communities, livelihoods and the marine ecosystems they depend upon. Banning it removes this problem – and any potential costs in doing must be handled at a policy level, without passing costs onto communities.
Many nations have already voiced clear support for the ban – at the IMO, and beyond – Finland, Iceland, Sweden, Norway and the United States, along with Germany, the Netherlands and New Zealand, proposed the ban on the use and carriage as fuel of HFO by ships operating in the Arctic, as the simplest approach to reducing the risks associated with HFO. Their proposal was supported by Australia, Belgium, Czech Republic, Denmark, Estonia, France, Ireland, Japan, the League of Arab States, Poland, Portugal, Spain, Switzerland, and the UK.
As this week’s meeting opened in London, IMO Secretary-General Kitack Lim said that “with future vessel traffic in Arctic waters projected to rise, the associated risk of an accidental oil spill into Arctic waters may also increase. It is therefore imperative that the [IMO] takes robust action to reduce the risks to the Arctic marine environment associated with the use and carriage of heavy fuel oil as fuel by ships”.
Yet a couple of Arctic nations are fence-sitting: Canada and Russia. Canada was initially enthusiastic – with Justin Trudeau’s government co-signing an agreement with the Obama Administration for a “phase-down” of the fuel from their respective Arctic waters in late 2016. Since then, Canada has back-pedalled, and made much of its IMO proposal to assess the impact of such a ban on Arctic communities. What Canada has so far failed to mention in its submissions to the IMO is that there is widespread, well documented support from indigenous Arctic communities from Canada, the US and Greenland.
Russia has considered a ban on use of HFO in the Arctic as a “last resort”. However, one of the biggest users of HFO in the Arctic, Russian state-owned shipping company Sovcomflot has spoken openly about the need to move away from oil-based fuels, and marine bunker fuel supplier Gazpromneft expects to halt fuel oil use from 2025. Significantly, in August 2018, Russian President Vladimir Putin and Finnish President Sauli Niinisto made a joint statement on the need to move to cleaner ships’ fuels in the Arctic.
The outcomes from this week’s IMO meeting will spawn no great headlines, no grand speeches from heads of state. IMO member states already agreed, back in April 2018, to move forward on an Arctic HFO ban, and recognized the need to reduce the impact in the Arctic of shipping’s black carbon emissions over five years ago.
But what happens this week is critical to addressing the threats to the Arctic, rather than just talking about them. Banning the world’s dirtiest fuel from one of the Earth’s few remaining pristine environments should be a no-brainer – hardly challenging when compared with the task of reversing climate change globally. Banning HFO use in the Arctic is a quick and simple step in the right direction – alternative fuel options are available, and non-fossil fuel forms of propulsion are on the horizon. The global environment and this planet’s climate are under severe pressure. Real, positive change is required across the board, from everyone on the planet. Even with the new-found enthusiasm within the shipping industry, there is much work to be done in negating the impact of shipping on our health, and the health of the planet. Ensuring the Arctic is protected from oil spills and black carbon pollution from heavy fuel oil is one of these ways – it’s achievable, and within our grasp.
Dave Walsh is Communications Advisor to the Clean Arctic Alliance | <urn:uuid:d736453c-9ed3-4ffc-a7e5-2010c63be0a7> | CC-MAIN-2021-39 | https://www.hfofreearctic.org/en/2019/02/18/ridding-the-arctic-of-the-worlds-dirtiest-fuel/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00152.warc.gz | en | 0.953888 | 1,931 | 2.53125 | 3 | {
"raw_score": 3.0339319705963135,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Sport a rainbow lanyard and show your support for IDAHOTB
May 16, 2018
The University is flying the rainbow flag on Thursday 17 May outside the Old Fire Station to celebrate and show our support for International Day Against Homophobia, Transphobia and Biphobia (IDAHOTB).
The day was created in 2004 to draw the attention of policymakers, opinion leaders, social movements, the public and the media to the violence and discrimination experienced by LGBT+ people internationally. It specifically marks the date when, in 1990, the World Health Organisation removed homosexuality from its list of mental illnesses.
People celebrate May 17th in more than 130 countries, including 37 where same-sex acts are illegal. The many diverse events around the world are hailed as joyful, global celebrations of sexual and gender diversities. At the same time, this annual event highlights the continuing oppression around the globe against lesbian, gay, bisexual and transgender (LGBT+) people.
There is clear evidence that shows the discrimination of LGBT+ people happening all over the world, but also specifically in the UK. Stonewall reports that just under a fifth of lesbian, gay, and bisexual employees have experienced verbal bullying from colleagues, customers, or service users because of their sexual orientation in the last five years. Their research also shows that nearly half (42%) of trans and non-binary people are not living permanently as their true gender identity as they feel they are prevented from doing so because it might threaten their employment status. Stonewall also found that over 10% of trans people experienced verbal abuse at work, and 6% of those surveyed were physically assaulted at work. These figures come from UK studies in the last five years. These are still issues LGBT+ people face within the UK workplace today, and a cultural change must happen to ensure their safety.
The focus for 2018 is Alliance for Solidarity stressing the importance of working together, especially where we need to ensure safety, fight violence, lobby for legal change and/or campaign to change hearts and minds.
What you can do
It is important that you speak out against any small moment of discrimination you see against LGBT+ people. It does not have to be a big gesture, but even having a private conversation with a colleague can help change views and open the conversation surrounding the issues that LGBT+ people face. Every little thing challenged is an opportunity to make the world (and the workplace!) a safer and more pleasant place to be for LGBT+ people.
If you like, you can also collect a LGBT+ rainbow lanyard on 17th May from Maxwell Reception or University House to show your support for the LGBT+ community at the University of Salford, whether that be colleagues, students or visitors to our campus.
Many of our colleagues including our Senior Leadership Team are demonstrating clearly that they STAND AGAINST all forms of homophobia, transphobia and biphobia by wearing their ID cards on our fabulous rainbow lanyards. Take a look by scrolling through our Flickr gallery below.
The most important thing you can do as allies is be aware of the issues that LGBT+ people face, and be mindful of how these issues affect their lives and commit to stand against them.
Please, shout about it all over social media using the hashtag #IDAHOTB, and send your rainbow lanyard selfies to @SalfordProud – we would love to see them and hear what you are doing (no matter how small) to celebrate and raise awareness.
We have also created the following images that you can use on social media to show your support. You can download them by clicking the thumbnails below and saving them to your computer.
Resources you can access:
The LGBT Foundation: https://lgbt.foundation/how-we-can-help-you
The Employee Assistance Programme: 0800 716 017, http://www.healthassuredeap.com
Employee Relations Advice: [email protected], extension 52121 | <urn:uuid:448acefa-0e83-43d5-b5c3-1b8b630dc7e4> | CC-MAIN-2018-39 | http://staff.salford.ac.uk/newsitem/6236 | s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160641.81/warc/CC-MAIN-20180924185233-20180924205633-00375.warc.gz | en | 0.949726 | 821 | 2.6875 | 3 | {
"raw_score": 2.7490766048431396,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Politics |
A built-in gas hob that comes with the safety flame device will have its gas supply cut off if the flame on the burner head is extinguished by a strong wind.
Tempered glass is a safety glass that, when broken, shatters into small fragments or oval-shaped pebbles rather than the sharp, pointed pieces produced by normal glass.
By default, all Zanussi gas hobs can use normal LPG gas. If the customer is using natural gas, they should ask Electrolux service to change the gas nozzle on the hob to suit natural gas use. If the gas nozzle is not changed, the flame from the burners will be small.
An induction hob creates an electromagnetic field when a metal pan is placed on the cooking surface. This electromagnetic field will then heat up the pan. A ceramic hob works by using heating elements under the glass to heat up the metal pan.
When cooking with an induction hob, you will not see a red glow on the hob's glass surface. The pan becomes hot, but the smooth glass surface outside the diameter of the pan remains safe to touch. However, there is residual heat after long hours of cooking.
When cooking with a ceramic hob, you will see a red glow surrounding the pan, and both the metal pan and the smooth glass surface feel hot to the touch.
For cooking with Induction hob, you should use only pots and pans which have base made of iron properties that can attract magnet as it uses an electromagnetic field to heat up. For cooking with a ceramic hob, any pots and pans that are made of glass, aluminum, stainless steel, clay can be used for ceramic cooking.
Yes, an induction cooker is faster than a traditional electric cooktop or a gas cooker. It allows instant control of the cooking energy, similar to gas burners. Other cooking methods use flames or red-hot heating elements, but induction heating heats only the pot.
No. An induction cooker transfers electrical energy by induction from a coil of wire when an alternating current flows through it. The current creates a changing magnetic field, which produces heat in the pot; the pot then heats its contents by conduction. The cooking surface is made of a glass-ceramic material that is a poor heat conductor, so only a little heat is lost through the bottom of the pot and energy wastage is minimal compared with open-flame cooking or a normal electric cooktop. The induction effect does not heat the air around the vessel, which adds to the energy efficiency.
An induction cooker is simply a source of heat, so cooking with it is no different from cooking with any other heat source; however, heating is much faster.
The hob surface is made of ceramic glass, which is very strong and tolerates very high temperatures and sudden temperature changes. It is very tough, but it may crack if you drop a heavy item of cookware on it; in everyday use, however, this is unlikely.
Yes, an induction cooker is safer to use than conventional cookers because there are no open flames or exposed electric heating elements. Cooking cycles can be set by the required cooking duration and temperature, and the cooker switches off automatically once the cooking cycle has been completed, to avoid overcooked food and the risk of damaging the cooker.
"raw_score": 2.195451021194458,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Home & Hobbies |
Migraine Study Reveals Evidence of Possible Changes to Brain
Research published yesterday by the journal Neurology shows there is a link between Migraine and the risk of three specific types of structural changes within the brain. But what does this actually mean for patients?
While this evidence may sound frightening to patients who suffer Migraine, more research needs to be done before we know the implications of these particular structural changes and how they relate to the function of the patients who have them, as well as the deeper details of those patients who are at greater risk.
Does frequency or severity play a part? How about the length of time the patient has experienced their Migraines? Genetics? We don’t know. Studies like this help to lay the foundation for more detailed looks at what is happening.
Although it’s scary to receive a diagnosis of structural changes, some of these changes have been researched before and, as yet, have not been found to affect the function of patients who have them. Now we have more perspective. Knowing that these changes may not ultimately affect the function of patients who find they have them, let’s talk about what the study actually said and what it means:
The research itself looked at several studies together. They used the information they found in many studies done by many researchers, and put it all together to try to give us a more complete and diverse picture about this fairly mysterious situation. This is a great way to analyze data, because it helps to minimize questions of bias, etc., that can sometimes come up with single studies, and it utilizes a broader range of patients and their data.
Three types of structural changes were analyzed: white matter abnormalities, infarct-like lesions, and changes in white and grey matter volume. All of these did appear to be significantly more common in patients with these two types of Migraine than in controls (patients without Migraine). The strongest correlation however, was in patients who have Migraine with aura.
Patients with white matter abnormalities were more likely to have Migraine with aura, but not Migraine without aura.
Risk for infarct-like lesions was greater for Migraine with aura than Migraine without aura.
Changes in volume of brain tissue was more common in Migraine patients with and without aura over controls.
Summary and implications:
Sometimes, research like this can leave patients and their loved ones stunned and feeling helpless. It is important to remember that each of these studies is like a piece of the Migraine puzzle, and the more pieces we have, the clearer the picture becomes. The information here is not the whole, but a part of the whole. This information is important to us, though, because it helps to remind us why it is so important to pay attention to our disease.
Appropriate diagnosis, treatment, and management to keep our Migraine attacks to a minimum are things over which many of us have some control. We already know ignoring our Migraines is not helpful. Not adding insult to injury by adding medication overuse headache to the mix is something we can control. Maximizing our health so we can hopefully minimize the chance that we end up with further problems is also in our control.
Have you and your doctor discussed your own personal plan of attack for getting and keeping the best control over your Migraines?
Please post a comment with suggestions, questions, and comments so we can discuss them.
1 Bashir, Asma, MD; Lipton, Richard B., MD; Ashina, Sait, MD; Ashina, Messoud, MD, PhD. "Migraine and structural changes in the brain." Neurology. Early View. August 28, 2013.
2 Press Release. "Migraine May Permanently Change Brain Structure." American Academy of Neurology. August 28, 2013.
Live your best life,
© Ellen Schnakenberg, 2013.
Last updated August 30, 2013.
Potatoes: How to grow
Growing from seed potatoes is easy and really rewarding and there are many different varieties available.
Seed potatoes are normally categorised by first earlies, second earlies, maincrop and salad varieties.
Allow your potatoes to ‘chit’ (sprout) before planting. You can do this by placing them in a shallow tray (egg boxes are ideal) with any shoots facing upwards.
Position the tray in the light and protect from frost and extreme heat.
Leave the potatoes for roughly six weeks to allow sprouts to grow to about 1.5-2.5cm (0.5-1 inch) tall.
Once the seed potatoes have reached this point, you are ready to plant them out, either in the ground or in a grow bag stood on its end so that it sits vertically, with the top cut off.
Planting potatoes in the ground
Choose a sunny spot (avoid frost pockets) and dig to break down the soil, removing any clods.
Dig a trench in the soil 15cm (6 inches) deep by 15cm (6 inches) wide, before laying out the seed potatoes with the shoots facing upwards.
Position the seed potatoes 25-30cm (10-12 inches) apart, allowing 60cm (24 inches) between each row. Cover over with excess soil.
Planting Potatoes in a grow bag or container
Fill a potato growing bag or container with 15cm (6 inches) of Gro-Sure Vegetable Growing Compost.
Position 3-4 seed potatoes evenly on top of the compost, with sprouts facing upwards.
Cover potatoes with a further 10-15cm (4-6 inches) of compost and ensure the compost is well watered. First shoots should appear within a few weeks.
Once the first shoots begin to show, cover with a new layer of soil.
Once the shoots are 5cm (2 inches) tall, cover the shoots with more soil to block out any light. This process is called ‘earthing up.’ Repeat this process twice.
Water well, especially once foliage has formed, watering onto the compost, rather than the foliage.
When your potato plant starts to flower (roughly 12 weeks after planting) it is usually a sign that your potatoes are getting ready for harvesting. Usually aim to harvest at flower drop or when the foliage starts to yellow.
A good guide is that salad crops take about 12 weeks and main crops take about 22 weeks.
The longer you keep your potatoes in the ground, the larger they will be and the bigger the yield.
Carefully remove the whole potato plant by digging up from the side using a garden fork to avoid bruising.
Store in a cool, dark, frost-free place, and wash before use.
Permanent doping of semiconductors and low-dimensional structures to modulate their electronic properties is a well-established concept. Even in cases where doping of thin films by analytes (e.g. carbon nanotubes by ammonia) is applied in sensors, it is only reversed by physical removal of dopant molecules, e.g. heating. We have introduced the concept of molecular switches as chemical dopants for thin nanocarbon (or other 2D-materials) films. These molecules can be switched between doping and non-doping states in the presence or absence of a particular analyte. They impart selectivity not only due to their change in doping behavior, but also by physically blocking other potential dopants in the analyte solution from interacting with the conductive film. The resulting structures can act as chemiresistive films.
Chemiresistive sensors are a well-established technology for gas-phase sensing applications. They are simple and economical to manufacture, and can operate reagent-free and with low or no maintenance. Unlike electrochemical sensors they do not require reference electrodes. While in principle they can be made compatible with aqueous environments, only a few such examples have been demonstrated. Challenges include the need to prevent electrical shorts through the aqueous medium and the need to keep the sensing voltage low enough to avoid electrochemical reactions at the sensor. We have built a chemiresistive sensing platform for aqueous media. The active sensor element consists of a percolation network of low-dimensional materials particles that form a conducting film, e.g. from carbon nanotubes, pencil trace, exfoliated graphene or MoS2. The first member of that platform was a free chlorine sensor.[1-3] We are currently working to expand the applicability of our platform to other relevant species, in particular anions and cations that are commonly present as pollutants in surface and drinking water. Our sensors can be incorporated into a variety of systems and will also be suitable for online monitoring in remote and resource-poor locations.
L. H. H. Hsu, E. Hoque, P. Kruse, and P. R. Selvaganapathy, A carbon nanotube based resettable sensor for measuring free chlorine in drinking water. Appl. Phys. Lett. 106 (2015) 063102.
E. Hoque, L. H. H. Hsu, A. Aryasomayajula, P. R. Selvaganapathy, and P. Kruse, Pencil-Drawn Chemiresistive Sensor for Free Chlorine in Water. IEEE Sens. Lett. 1 (2017) 4500504.
A. Mohtasebi, A. D. Broomfield, T. Chowdhury, P. R. Selvaganapathy, and P. Kruse, Reagent-Free Quantification of Aqueous Free Chlorine via Electrical Readout of Colorimetrically Functionalized Pencil Lines. ACS Appl. Mater. Interfaces 9 (2017) 20748-20761.
P. Kruse, Review on Water Quality Sensors. J. Phys. D 51 (2018) 203002.
Shortly after midnight on 26 April 1986, the name Chernobyl became synonymous with nuclear disaster. A nuclear power station in northern Ukraine exploded after a safety test, casting a radioactive shadow over the land.
The literal and figurative fallout of the event continues to inspire a strange mix of fear and hope over the risks of nuclear power. But it’s still hard to know whether the field trial of an apocalypse was ultimately a win or a catastrophe for nature.
A 2,600 square kilometre (1,000 square mile) exclusion zone has accidentally become a sort of nature reserve. Census data shows some populations have boomed without humans around.
But years of studies on the zone’s ecosystems also suggest much of the life hit hard by ionising radiation hasn’t adapted. There are exceptions, opening debate on the future effects of fallout on organisms in areas like Chernobyl and Fukushima.
This is what we know so far.
Black fungus and dead decomposers
Many microbes are quick to take advantage of disasters that spell doom for most other organisms.
Melanin-rich fungi such as Cladosporium sphaerospermum, Cryptococcus neoformans, and Wangiella dermatitidis have become kings of Chernobyl’s inner sanctum thanks to the effects ionising radiation has on their pigments. Research shows not only have these fungi tolerated the power station’s radioactivity – they’ve feasted on it.
It’s not all good news for other decomposers in the surrounding ecosystems, though. While some of the smallest species have taken advantage, others have suffered greatly.
A study published in 2014 found a significant drop in the rate of decomposition of leaf litter at 20 forest sites around Chernobyl, pointing to big changes inside the exclusion zone for its tiniest recyclers.
Red pines and soybeans
Nuclear disasters are often imagined as barren wastelands of dust, dry grass, and stripped trees. If anything, images of Chernobyl’s surrounds show it to be overgrown with foliage. What gives?
Well, not all of the plants did so well in Chernobyl’s wake. Around 10 square kilometres (about 4 square miles) of pine woodland was dubbed the Red Forest after radiation damage turned their leaves brown. Many of the trees were bulldozed and the ground beneath remains one of the most contaminated parts of the exclusion zone.
But plants in other areas have found clever ways to cope with the stress of increased radiation.
A study on soybeans growing within Chernobyl’s restricted zone was compared with plants grown 100 kilometres (over 60 miles) away.
The researchers found the radiated plants weren’t exactly thriving, but managed to survive nonetheless by pumping out proteins known to bind heavy metals and reduce chromosomal abnormalities in humans.
Bird brains and orange feathers
Bird populations were some of the hardest hit by the disaster. One study on 550 specimens covering nearly 50 species found radiation impacted on the bird’s neurological development, with a significant drop in brain volume.
Having brighter feathers also put some species at a clear disadvantage.
The biochemistry used to produce large amounts of pheomelanin also depletes the body’s supply of antioxidants – a useful family of chemicals to have in large amounts if you want to deal with the damage caused by radiation.
Researchers found population declines were stronger in species that had carotenoid colouration and large body masses, suggesting black has become the new orange in Chernobyl bird fashion.
Multitudes of mice and beasts bouncing back
Arguably, the most surprising discovery in Chernobyl’s human-free-zone is the mass of mammals that made a quick return to the forests.
Studies conducted in the mid-1990s on the smaller critters, such as mice and voles, found there wasn’t a noticeable difference in population sizes across the zone’s boundary.
Larger animals, including deer and boar, have also bounced back in recent decades.
The biggest winners seem to be wolves, which have had a field day with large numbers of prey and scarcity of humans. By some estimates there are seven times more wolves inside the zone than outside.
Big numbers doesn’t necessarily mean great health, though.
In the words of University of Portsmouth researcher Jim Smith, “This doesn’t mean radiation is good for wildlife, just that the effects of human habitation, including hunting, farming, and forestry, are a lot worse.”
The Babushkas of Chernobyl
For the most part, humans aren’t allowed to live inside the exclusion zone. Around 350,000 people were evacuated, leaving the land for nature to reclaim.
Some 1,000 people returned to call the radiated land home in the ensuing months, choosing familiarity and solitude in spite of ticking Geiger counters.
Made up mostly of elderly women, these so-called “Babushkas of Chernobyl” are a vanishing people. Today barely one in ten of those returned citizens remain, though the dwindling numbers don’t seem to be a direct consequence of their toxic surroundings, but rather succumbing to old age.
As with most living things in Chernobyl’s ecosystem, it’s hard to draw a line between the risks posed by radiation and thriving in a home abandoned by the modern world.
Mouse over
In jQuery, the .mouseover() method is a shortcut for .on("mouseover", handler) in the first two variations, and .trigger("mouseover") in the third. The mouseover event is sent to an element when the mouse pointer enters that element. The buttons property of the underlying mouse event reports which buttons were depressed when the event was fired: left button = 1, right button = 2, middle (wheel) button = 4, 4th button (typically the "Browser Back" button) = 8, 5th button (typically the "Browser Forward" button) = 16; if two or more buttons are depressed, it returns the logical sum of the values. In computing, a mouseover, mouse hover or hover box is a graphical control element that is activated when the user moves or "hovers" the pointer over its trigger area, usually with a mouse, but also possibly with a digital pen.
The mouseover event occurs when the mouse pointer is over the selected element. The jQuery mouseover() method triggers the mouseover event, or attaches a function to run when a mouseover event occurs. The HTML onmouseover attribute fires when the mouse pointer is moved onto an element, or onto one of its children. The CSS :hover selector is used to select elements when you mouse over them.
The mouseover event occurs when a mouse pointer comes over an element, and mouseout when it leaves. These events are special because they carry a relatedTarget property identifying the element the pointer came from or moved to.
In plain terms, a mouseover is simply hovering over an item; the jQuery .hover() method binds handlers for both the mouseenter and mouseleave events. A mouseover is an event that occurs in a graphical user interface (GUI) when the mouse pointer is moved over an object on the screen such as an icon or a button. In the HTML, after the BODY tag, the element that should react to the mouseover is placed at the spot on your web page where you want the action to take place. The mouseover event fires when the user moves the mouse onto an element; the mouseout event fires when the user moves the mouse out of an element.
In AngularJS, ng-mouseover specifies custom behaviour on the mouseover event, for example <button ng-mouseover="count = count + 1" ng-init="count=0">Increment (when mouse is over)</button>. To "mouse over" something means to use a computer mouse to move the cursor over a particular part of the screen. An important difference between the related events: mouseover fires when the pointer moves into a child element as well, while mouseenter fires only when the pointer moves into the element to which it is bound.
"raw_score": 1.0145539045333862,
"reasoning_level": 1,
"interpretation": "Basic reasoning"
} | Software Dev. |
CLEO’s 2018 ECCC Symposium Panel
How vulnerable are we? Impacts of Climate on water, food & health.
Impacts of Climate Change in Food Systems & How Food impacts Climate Change
Food and agriculture are significant contributors to and heavily impacted by climate change:
Food production generates up to 30% of global greenhouse gas emissions, and accounts for substantial proportions of land-use change and global water consumption.
Humans waste about ⅓ of the food we produce for consumption or 1.3 billion tons! According to FAO, if Food Waste were to be a country, it would be the 3rd largest greenhouse emitter in the world after China and the US.
The overall impact of climate change on agriculture & food systems is expected to be negative, reducing food supplies and raising food prices.
Global and national modelling studies suggest that yields of major cereals will decline under scenarios of increased temperature, especially in tropical countries. Many regions already suffering from high rates of hunger and food insecurity, including parts of sub-Saharan Africa and South Asia, are predicted to experience the greatest declines in food production.
Elevated levels of atmospheric carbon dioxide (CO2) are also expected to lower levels of zinc, iron, and other important nutrients in crops.
Changes in rainfall patterns- from flooding and drought- threatens our farmers were both extremes can destroy crops. Flooding washes away fertile topsoil that farmers depend on for productivity.
Certain species of weeds, insects, and other pests benefit from higher temperatures and elevated CO2. Just like with vector-borne diseases, like Zika, shifting climates also mean agricultural pests can expand to new areas where farmers hadn’t previously dealt with them.
Rising sea levels, meanwhile, heighten flood dangers for coastal farms, and increase saltwater intrusion into coastal freshwater aquifers—making those water sources too salty for irrigation.
Biodiversity loss, including of critical crop pollinators, and loss of soil quality will both have substantial impacts on global fruit and vegetable supply and thereby on population health.
Diets and consumption patterns affect climate change. As countries develop, diets tend to change in ways that negatively affect both the environment and health. Reducing meat consumption could reduce emissions substantially. And the right diet changes could positively impact not just climate change, but also health. There’s no question that the responsibility for eating lower on the food chain falls heavily on countries like the U.S., which has the highest per capita consumption of meat and dairy. But in a changing, warming world, where water scarcity threatens continued agricultural growth and agricultural land faces pressure from infrastructure development and protected areas, can our planet sustain the 9 billion mouths it is bound to have in 2050, just 32 years from now?
Questions to the panel:
Please weigh in on the facts and the current state of our food systems with regard to climate change impacts. How vulnerable are we? What are the current projections telling us?
Can we feed a growing world population and still meet the climate goals?
Are we seeing climate change impacts at a local level here in South Florida? Give us examples
What are the opportunities at a local policy level? What can be done that we are not currently doing?
What can we do as citizens?
IF YOU DO NOT LEARN REAL HISTORY,
Edith Cavell was a British nurse in World War I who wrote in the April 15th issue of the Nursing Mirror of her plan to bring the Great War to a speedy conclusion. She was executed under orders from William Wiseman the head of MI6 for North America.
She had discovered that the British relief program to feed the Belgian widows and orphans was a fraud used by the Rothschild interests to prolong the war. They did not feed the Belgians. The food was put on rail cars and sent across the lines to feed the German soldiers. The relief efforts were headed by the exceedingly despicable Herbert Hoover who profited from this enterprise. He was a Rothschild business partner in Rio Tinto Zinc. Hoover also organized the relief program to save the Russians from starvation under a fellow Rothschild associate, Joe Stalin.
Ferdinand Lundberg in America’s Sixty Families told us that an associate of JP Morgan went around America in 1915 telling businessmen that America could prolong the war in Europe by promising to enter the war after the elections of 1916. This would be wonderful thing to do because it would bankrupt England and France making America the world financial capital.
That the prolongation of the Great War killed millions, made the Soviet revolution possible which killed more than sixty million and set the horrors of WW II in motion is of no consequence to men who think like bankers in terms of human life.
World War II was unavoidable. Admiral Canaris and General Beck sent two officers to London in March of 1939 to negotiate a surrender which included an arrest of Hitler. This was refused. The Rothschilds said No to peace according to Ambassador Joseph Kennedy. The bankers needed Hitler to play the bad guy. Stalin would have attacked Germany with 20 or 30 thousand tanks even if Hitler had never been born. What would the Germans have done if the Russians had a line of 18,000 artillery firing rounds every few seconds with 20,000 tanks in front and 10,000 planes overhead?
The final secret of WW II might have been that the Allies killed more people after WW II was over than Hitler ever did during the war in those concentration camps. General Eisenhower who was Jewish killed about one million Germans in 200 camps that were nothing more than open fields with no shelter, no food, no latrines and no medicine. Many of these prisoners were civilians ranging in age from 13 to 80.
…Rest of Article at: http://www.fourwinds10.net/siterun_data/history/european/news.php?q=1317836297
Portraiture played an important role in the Elizabethan era. Queen Elizabeth’s portraits conveyed the regal image of a powerful monarch—the steadfast, ageless force behind England.
Owning paintings of the Queen was viewed as a status symbol. Robert Dudley, Earl of Leicester, displayed over 50 paintings at his Castle in Kenilworth, Warwickshire—a bold reminder to guests that he was the man closest to the Queen.
Although Elizabethan artists drew inspiration from the European Renaissance, it was Elizabeth herself who was the national preoccupation. Invoking her image in paintings and literature had the effect of elevating them to a higher level.
Today, we like to think of art as an expression of feelings and beliefs. But in Elizabethan England, flattery was the order of the day—a time when most artists needed wealthy sponsors to survive.
It was the patriotic duty of artists to glorify their queen. Gloriana!
Join us as we marvel at paintings from Queen Elizabeth’s life and discuss some of the symbolism used to project an image of purity, virginity, and majesty.
The ‘Hampden portrait’ (above) is the earliest full-length portrait of the queen, made before the emergence of symbolic portraits representing the iconography of the “Virgin Queen”.
The ‘Pelican’ and ‘Phoenix’ portraits (above) were named after the beautiful pendants worn by the queen (shown just above her hand in each painting).
The Pelican jewel denotes self-sacrifice since a pelican was thought to draw blood from its own breast to feed its young. It represents Elizabeth’s role as mother to the nation and of the Church of England.
The Phoenix is a mythical bird symbolizing rebirth and chastity—an emblem of virginity, carrying the hope that she would be able to continue the dynasty. Elizabeth holds a red rose—the symbol of the House of Tudor.
The Darnley Portrait (above) features symbols of sovereignty—a crown and sceptre—used as props instead of being worn or carried. This Tudor theme would be expanded upon in later portraiture.
Named after a previous owner, it is the source of the face pattern called “The Mask of Youth” which would be used for authorized portraits of Elizabeth for decades to come. The faded oranges and browns would have been crimson red in Elizabeth’s time.
The ‘Sieve Portrait’ (above) depicts Elizabeth as Tuccia, a Vestal Virgin who carried a sieve full of water from the Tiber to the Temple of Vesta to prove her chastity.
Around her are symbols of imperial power, including a column with the crown of the Holy Roman Empire at its base and a globe showing ships sailing west in search of the New World.
The Ermine Portrait (above) symbolizes purity and status. Legend has it that the ermine (of the stoat or weasel family) would rather die than soil its pure white coat—prized as a status symbol that only royalty and nobility could afford to wear. The olive branch and the sword of justice represent the righteousness and justice of Elizabeth’s government.
One of three versions of the same portrait, the Woburn Abbey version (above) is unusual in its landscape format. It depicts England’s victory over the Spanish Armada in the background.
Elizabeth has her back against the storm and darkness of the past, and her hand rests over the New World, signifying England’s expansionist plans for the future.
The Ditchley Portrait (above) was commissioned by Sir Henry Lee (1533 – 1611), of Ditchley, Oxfordshire, who was Queen’s Champion and Master of the Armoury.
After his wife died, Lee lived openly with one of the Queen’s Ladies in Waiting. Needless to say, the queen did not approve. But on a visit to his home in Ditchley, she forgave him—for becoming a “stranger lady’s thrall”.
The portrait is the largest and grandest ever painted of the queen. The symbolism shows just how eager Sir Henry must have been to show his loyalty and subservience.
Elizabeth stands on a globe directly over Oxfordshire as the sun (symbol of the monarch) shines through a stormy sky. The Latin inscriptions say: “she gives and does not expect”; “she can but does not take revenge”, and “in giving back she increases”.
The ‘Hardwick portrait” (above) is all about the dress—typical of extravagant and sometimes outlandish late-Elizabethan embroidery depicting an eclectic mix of motifs from nature. Roses, irises and pansies intermix with insects, animals and fish. The lace ruff alone is a masterpiece.
It is thought that the dress was a New Year’s Day gift to the Queen from Bess of Hardwick—one of the most influential of Elizabeth’s courtiers who became England’s most powerful woman after the Queen died.
The ‘Rainbow portrait’ (above) was painted near the end of Elizabeth’s life—she was in her late sixties. It represents an ageless queen and is full of allegorical symbols. The cloak with eyes, ears and mouths has been given many interpretations by historians and is a fitting symbol to end our journey with.
During her reign of 44 years, English drama flourished through the likes of Shakespeare and Marlowe, the age of discovery was opening up the world through the likes of Sir Francis Drake.
The eyes of the world were upon Elizabeth; people listened to Elizabeth, and people spoke of her the world over.
Elizabeth carries a rainbow, next to which are the words non sine sol iris—”no rainbow without the sun”.
Earthquake casualty estimation
Recent advances are improving the speed and accuracy of loss estimates immediately after earthquakes (within less than an hour) so that injured people may be rescued more efficiently. "Casualties" are defined here as fatalities and injured people resulting from damage to occupied buildings. After major and large earthquakes, rescue agencies and civil defense managers rapidly need quantitative estimates of the extent of the potential disaster, at a time when information from the affected area may not yet have reached the outside world. For the injured trapped below the rubble, every minute counts. Rapidly providing estimates of the extent of an earthquake disaster is much less of a problem in industrialized countries than in developing ones. This article focuses on how one can estimate earthquake losses in developing countries in real time.
- 1 The need for theoretically estimating human losses in real time
- 2 Pinpointing the hypocenter and magnitude
- 3 Estimates of shaking
- 4 Built environment
- 5 Tracking population and location
- 6 Simplifications
- 7 State of the art
- 8 References
The need for theoretically estimating human losses in real time
For the first few days after an earthquake, practically no information flows from the center of the devastated area. Examples of the initial underestimation of the extent of earthquake disasters in developing as well as industrialized countries are shown in Figure 1. The responsible experts believed for 4 days that the death toll in the Wenchuan earthquake, Mw 8 of May 12, 2008, was less than 10,000.
Speedy arrival of medical teams and other first responders is essential for saving the injured from dying and helping others to get care. Theoretical estimates of the numbers of fatalities and injured within less than an hour of a large earthquake are the only information that can guide first responders to where and how large a disaster has struck. For this reason, the QLARM and PAGER teams maintain around-the-clock capabilities to calculate earthquake damage and casualties within less than 1 hour of any earthquake worldwide. No other groups are capable of these detailed analyses. This page can help medical and other responders understand how fast and how accurately loss estimates can be calculated after earthquakes, and what should be added to make them more helpful.
The estimate of fatalities distributed by email by the QLARM team of the International Centre for Earth Simulation Foundation (ICES) within 100 minutes of the Wenchuan earthquake was 55,000 ± 30,000, a range that includes the final toll of about 87,000.
For the 2009 L'Aquila earthquake, an M6.3 event, QLARM's estimate of fatalities, issued 22 minutes after the event, was 275 ± 200. The final death toll was 287. In both cases, the official fatality count was slow to reflect the true extent of the disaster. Thus, theoretical estimates of fatalities in real time can be useful for mounting an appropriate disaster relief response, even though these estimates have large error margins. Current QLARM alerts are published on the ICES website, and alerts by the US Geological Survey PAGER team are published on the USGS website.
Pinpointing the hypocenter and magnitude
The location of an earthquake (its epicenter and depth) needs to be known rapidly for estimating losses. It is calculated from the times at which the waves it generates arrive at seismographs surrounding the source. A computer moves the epicenter estimate close to those stations which record the waves first and far from stations that reported the waves later. This can be done within seconds to accuracies of 1 kilometer in regions where dense seismograph networks exist with inter-station distances of about 10 km. For most of the world, this luxury is not available and the worldwide seismograph network has to be used to estimate the location based on teleseismic data (recorded at distances of more than 1,000 km). This means that estimates of the location cannot be calculated before the waves have traveled hundreds and thousands of kilometers to stations that record them.
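The underlying idea can be sketched in a few lines of Python: try candidate epicenters on a grid and keep the one whose predicted arrival times best match the observations. This is only an illustration; it assumes a flat geometry, a single uniform P-wave speed, and a fixed (zero) depth, whereas real locators use layered Earth models and solve for depth as well.

```python
import itertools
import math

P_VELOCITY_KM_S = 6.0  # assumed uniform crustal P-wave speed (illustrative)

def locate(stations, arrivals, grid_step_km=5.0, search_km=200.0):
    """Grid-search epicenter: stations as (x_km, y_km), arrivals as P times in s.
    Returns (rms_residual, x_km, y_km, origin_time) of the best trial point."""
    best = None
    steps = int(search_km / grid_step_km)
    for i, j in itertools.product(range(-steps, steps + 1), repeat=2):
        x, y = i * grid_step_km, j * grid_step_km
        travel = [math.hypot(x - sx, y - sy) / P_VELOCITY_KM_S for sx, sy in stations]
        # Best origin time for this trial point is the mean arrival-minus-travel time
        t0 = sum(t - tt for t, tt in zip(arrivals, travel)) / len(arrivals)
        rms = math.sqrt(sum((t - t0 - tt) ** 2 for t, tt in zip(arrivals, travel)) / len(arrivals))
        if best is None or rms < best[0]:
            best = (rms, x, y, t0)
    return best

# Three stations and arrival times consistent with a source near (20, 30) km
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
arrivals = [6.01, 14.24, 12.13]
print(locate(stations, arrivals))
```

With dense local networks this kind of search converges on a point within a kilometre or two of the true epicenter; with only distant stations, the residual errors translate into the tens of kilometres quoted below.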
The race to know about a new earthquake
The following agencies distribute estimates of latitude, longitude, depth, and magnitude of worldwide earthquakes rapidly and with high accuracy. The Geoforschungszentrum, Potsdam, Germany, delivers automatic solutions within 7 minutes (median) for all major earthquakes worldwide. The National Earthquake Information Center of the United States Geological Survey (USGS) delivers solutions that are reviewed by a seismologist within 18 minutes (median) for all major earthquakes worldwide. The European-Mediterranean Seismological Centre delivers reviewed parameters mostly in the European area within 21 minutes (median). The Pacific Tsunami Warning Center and the National Tsunami Warning Center of the National Oceanic and Atmospheric Administration (NOAA) deliver reviewed parameters for earthquakes in the wider Pacific area within 9 minutes (median). These are updated numbers, slightly shorter than discussed in detail earlier.
If the epicenter is incorrect the loss estimate will be uncertain. Errors are introduced in the estimate of position mostly because of the heterogeneity of the Earth. Seismic waves travel with different speeds in different rocks. Uncertainties in real time epicenters estimated by teleseismic means are ±25 km (median).
The depth is important, but uncertain in the top 50 km. The depths of earthquakes range from 0 to about 700 km. Generally, only the earthquakes in the top 100 km are close enough to settlements to cause casualties. The decrease of the wave amplitudes as a function of distance (Figure 2) shows that dangerous intensities, I≥VII, do not exist beyond 30 to 50 km for major earthquakes. Thus, deep earthquakes are usually not of interest for alerts.
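In an alert pipeline this translates into a simple triage step: events that are too deep or too small are dropped before any detailed loss calculation. The thresholds below are illustrative only, not the actual rules of any operating service.

```python
def needs_loss_estimate(magnitude, depth_km):
    """Rough triage: deep or small events rarely produce damaging intensities."""
    if depth_km > 100:   # too deep to reach I >= VII at the surface
        return False
    if magnitude < 5.5:  # too small to cause widespread building damage
        return False
    return True

print(needs_loss_estimate(7.9, 19))   # True: shallow major earthquake
print(needs_loss_estimate(7.5, 600))  # False: deep-focus event
```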
The depth of the energy release can be estimated accurately (to within 1 km) if a seismograph station right above the earthquake (or near it) records the waves. This is usually not the case and one has to rely on teleseismic methods to estimate the depth.
The teleseismic method is to measure the time delay with which the wave reflected from the Earth’s surface above the earthquake arrives at a seismograph. The surface of the Earth acts like a mirror. A wave that runs up against it cannot travel into the air, so it is reflected back down into the Earth, traveling to the same seismograph that recorded the direct wave a little bit earlier. The time delay of the reflected wave depends of course directly on the extra distance it has traveled: from the hypocenter up to the surface and back down to the depth of the hypocenter.
This method works fine, if the hypocentral depth Z>50 km because, in that case, the direct and reflected phases (waves) are clearly separated on the record. For shallower depths, the delay is so small that the two pulses on the seismogram are not readily recognizable as separate pulses; it takes filtering techniques to separate and identify them.
It follows that the depth of the shallow earthquakes, those most dangerous, must be assumed to be 25 ±25 km, if there is no other evidence available. This uncertainty is approximately the same as that of the epicenter. In some cases, this error can be reduced on the basis of historical data. For regions where the tectonic style and the faults producing the earthquakes are well known, one may choose a depth assuming it is the same as in past earthquakes for which the depth had been determined accurately.
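When the reflected phase can be identified, converting its delay into a depth is straightforward. The sketch below uses the common approximation delay ≈ 2·Z·cos(i)/v, where i is the take-off angle from the vertical and v the P-wave speed above the source; both values here are assumed for illustration.

```python
import math

def depth_from_pp_delay(delay_s, vp_km_s=6.5, takeoff_deg=25.0):
    """Focal depth (km) from the pP-P delay, via delay = 2*Z*cos(i)/v."""
    return delay_s * vp_km_s / (2.0 * math.cos(math.radians(takeoff_deg)))

# A delay of about 16 s corresponds to roughly 57 km depth with these values
print(round(depth_from_pp_delay(16.0), 1))
```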
For earthquakes with magnitudes smaller than M7.5, the different agencies mentioned above as issuing location estimates usually distribute values of M within 0.2 units of each other. For these medium-sized earthquakes, the average of the estimates is a reliable determination of the earthquake size. However, for great earthquakes approaching or exceeding M8, the initial estimate of M is often significantly too small. This is because the surface-wave magnitude, which is quickly obtained, is defined as proportional to the 20-second Rayleigh surface wave, and this wave has a wavelength of about 100 km. It is therefore too short to reliably measure the M of an earthquake rupture exceeding 100 km. In these cases, an in-depth analysis, which takes time, is needed to arrive at the correct M.
As an example, the Wenchuan earthquake of 12 May 2008 had originally been assigned M7.5 in real time. Later estimates were M7.9 to M8.0. Based on the first estimate, fatalities had been expected to reach a maximum of 4,000; based on the second, the maximum had been calculated as 100,000. The observed number of fatalities in this case was 87,000, determined after months (see the figure in the introduction of this page).
Estimates of shaking
The magnitude for great earthquakes is often underestimated, at first. The standard teleseismic measure of the ‘size’ of an earthquake is the surface wave magnitude, Ms, which has to be derived by definition from the surface waves with 20 second period. A more reliable and more modern scale is that of the moment magnitude, Mw.
Variations of the amplitudes recorded at different seismograph stations are due to many reasons, but the mean magnitude derived from reports by many stations that have recorded the earthquake should be fairly stable. Nevertheless, the agencies which report source parameters (GFZ, NEIC, TWC, EMSC) differ in their magnitude estimates by 0.2 units, on average. This value is taken as the uncertainty of the magnitude estimate in real time.
There exists a special problem for great earthquakes, those with M>8. The waves with 20-second period, which define Ms, have wavelengths of only about 100 km. This means they are too short a yardstick to measure the size of ruptures that significantly exceed 100 km in length. For this reason Mw was introduced, based on wavelengths of about 1,000 km. Unfortunately, these long wavelengths do not become available as fast as shorter ones, resulting in initial underestimates of the magnitude of great earthquakes. As an example, for the Tohoku M9 earthquake of 11 March 2011, the initial estimates were: GFZ M8.5, NEIC M7.9, TWC M7.9, and EMSC M8.0.
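Because the 20-second waves saturate, the size of the largest events is better characterized by the scalar seismic moment M0 and converted to Mw with the standard definition Mw = 2/3·(log10 M0 − 9.1), with M0 in newton-metres:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude from the scalar seismic moment M0 (in N*m)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# A moment of roughly 4e22 N*m, about that of the 2011 Tohoku earthquake,
# corresponds to Mw of about 9.0
print(round(moment_magnitude(4e22), 1))
```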
The intensity of shaking diminishes away from the earthquake
Strong ground motions damage buildings, sometimes bringing about collapse. Shaking of the ground decreases with distance from the release of energy, the hypocenter, or, more accurately expressed, from the entire area of rupture. To calculate the intensity of shaking at a given settlement, the computer looks up the attenuation (decrease in amplitude) for seismic waves that travel the distance to the settlement in question.
Errors are again introduced through the heterogeneity of the Earth. The loss of energy along the wave path is not exactly the same in all parts of the world. Examples are shown in Figure 2. For poorly studied regions in developing countries, the uncertainty of the estimated intensities can be substantial, as shown by the different curves, because attenuation is poorly known.
Another factor that can lead to variations of observed intensity of shaking is the condition of the soil beneath a particular structure. The waves are amplified in unconsolidated soils compared to hard rock (Figure 3). In important cities, soil conditions and their amplification factors are mapped for microzonation purposes. This type of information is usually not available for settlements in developing countries. One has to assume that the mixture of conditions results in an average loss estimate for the city, overall.
An intensity, I, given in Roman numerals from I to XII, is calculated for each settlement, accounting for the magnitude of the earthquake and its distance, and also accounting for the local amplification, if known.
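A minimal sketch of this step is given below. The attenuation coefficients and the soil increment are placeholders, not a calibrated relationship for any particular region; operational systems use region-specific attenuation laws and, where available, microzonation maps.

```python
import math

A, B, C, D = 1.5, 1.5, 2.7, 0.002   # placeholder attenuation coefficients

def expected_intensity(magnitude, hypocentral_distance_km, soil_amplification=0.0):
    """Macroseismic intensity (on the I to XII scale) at one settlement."""
    r = max(hypocentral_distance_km, 1.0)   # avoid log10 of very small distances
    i = A + B * magnitude - C * math.log10(r) - D * r + soil_amplification
    return min(max(i, 1.0), 12.0)

# Magnitude 7 felt at 30 km on soft sediments (+0.5 intensity units of amplification)
print(round(expected_intensity(7.0, 30.0, soil_amplification=0.5), 1))
```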
The built environment is poorly known for some countries. The quality of buildings differs by country and settlement size. For estimating damage to the built environment, one needs to calculate the damage expected for each type of building present in a given settlement. For each settlement one needs to know the distribution of buildings into classes with different resistance to strong shaking. A common scale for classifying building types is the European Macroseismic Scale (EMS98).
The distribution of building types is different in industrialized and developing countries (Figure 4) and also in villages compared to cities in the same country. Many earthquake engineers work on the problem of better defining the world data on building properties (World Housing Encyclopedia).
After one knows the distribution of buildings into classes (histograms on the left in both frames of Figure 4), one needs to estimate how the population is distributed into these building types (histograms on the right in both frames of Figure 4). These distributions are not identical because the higher quality houses tend to shelter more people per building.
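The conversion from a building-class distribution to a population distribution can be sketched as a weighting by average occupancy. All numbers below are invented for illustration; they do not describe any real country.

```python
building_fraction = {"A": 0.30, "B": 0.40, "C": 0.20, "D": 0.10}   # share of buildings per class
occupants_per_building = {"A": 4, "B": 6, "C": 15, "D": 40}        # average occupants per building

def population_by_class(total_population):
    """Higher-quality, larger buildings shelter proportionally more people."""
    weights = {c: building_fraction[c] * occupants_per_building[c] for c in building_fraction}
    total = sum(weights.values())
    return {c: round(total_population * w / total) for c, w in weights.items()}

print(population_by_class(10_000))
```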
The Haiti earthquake, M7.3, of 12 January 2010 showed that in this case the quality of construction had been vastly overestimated by the engineering community. Each new damaging earthquake serves as a source of new information on building properties in the region. In the immediate aftermath of the Haiti earthquake of 12 January 2010, a joint study for the estimation of damage to the building stock based on aerial images was carried out by UNITAR-UNOSAT, the EC-JRC, and the World Bank/ImageCAT in support of the PDNA. Hancilar et al. (2013) have developed empirical fragility functions based on remote sensing and field data for the predominant building typologies (http://earthquakespectra.org/doi/abs/10.1193/121711EQS308M). The international project Global Earthquake Model (GEM) has the aim of producing a world map of earthquake risk. As part of this gigantic effort, data sets will be improved, which are also needed for real-time loss assessments. One of these is the data set on world housing properties.
Deaths from collapsing buildings
The probability that a building of a given type may collapse if subjected to a certain intensity of shaking (Figure 5) is an important parameter for calculating expected human losses. The weak buildings that are present in developing countries (Figure 4 on the left) are the ones that are likely to collapse at moderate intensities (Figure 5 on the left).
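Such collapse probabilities are usually represented by fragility curves. The sketch below uses a lognormal-shaped curve per vulnerability class; the medians and widths are invented for illustration, whereas real fragility functions are fitted to damage surveys.

```python
import math

FRAGILITY = {   # class: (median intensity for collapse, logarithmic width)
    "weak masonry (EMS class A)":        (9.0, 0.12),
    "reinforced concrete (EMS class C)": (10.5, 0.12),
}

def collapse_probability(intensity, median, beta):
    """Cumulative normal of ln(I/median)/beta, a common fragility-curve shape."""
    z = math.log(intensity / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for label, (median, beta) in FRAGILITY.items():
    print(label, round(collapse_probability(8.5, median, beta), 3))
```

The weaker class reaches substantial collapse probabilities already at moderate intensities, which is exactly the pattern described for developing countries.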
The numbers of fatalities and injured (casualties are the sum of these two parameters) are estimated, using a casualty matrix, a table which gives the percentages of dead, injured, and unscathed among the occupants of a building that collapses. This distribution depends strongly on the building type.
A building need not collapse to injure and kill; at every damage degree there exists a probability that casualties will result.
The data in casualty matrices are so poorly known that we cannot give uncertainties here. However, specialists are working on learning more about this and related problems in estimating losses due to earthquakes.
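The arithmetic of applying such a matrix is simple; the difficulty lies in the numbers themselves. The fractions below are placeholders only.

```python
CASUALTY_MATRIX = {   # class: (fraction dead, fraction injured) among occupants of collapsed buildings
    "weak masonry":        (0.20, 0.30),
    "reinforced concrete": (0.10, 0.40),
}

def casualties_from_collapses(occupants_in_collapsed_buildings, building_class):
    dead_rate, injured_rate = CASUALTY_MATRIX[building_class]
    return (occupants_in_collapsed_buildings * dead_rate,
            occupants_in_collapsed_buildings * injured_rate)

print(casualties_from_collapses(500, "weak masonry"))   # (100.0, 150.0)
```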
Tracking population and location
Population at risk in a given quake
One would think that one can simply look up the population in all settlements of a country in its census. However, that is not the case for the countries we are targeting. Data sources on the web include the World Gazetteer, the National Geospatial-Intelligence Agency (NGA), and GeoNames for population by settlements. However, these lists are incomplete, omitting small settlements. In many countries the sum of the population listed by the above-mentioned organizations equals only 50% to 80% of the total population as estimated in The World Factbook of the CIA. Also, many settlements are listed without coordinates, and others with coordinates but not population.
Occupancy rates vary as a function of the time of day and the season. The worst time for an earthquake to strike is the night, because most of the population is indoors. The times when the consequences are less serious are the morning and evening hours, when farmers are out of doors and office and factory workers are commuting. The fluctuations in occupancy rate have been estimated to be about 35%.
In areas with strong seasonal tourism, the population may fluctuate up to a factor of 10. These fluctuations depend strongly on the location. Currently, there exists no worldwide dataset to account for this effect in loss estimates.
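One simple way to fold the time of day into a loss estimate is a diurnal occupancy factor that scales the exposed indoor population. The values below are assumed for illustration; seasonal and touristic fluctuations would require additional, location-specific data.

```python
def indoor_occupancy(hour_local):
    """Fraction of residents assumed to be indoors at a given local hour."""
    if hour_local >= 22 or hour_local < 6:
        return 0.95   # night: nearly everyone indoors and asleep
    if 7 <= hour_local < 9 or 17 <= hour_local < 19:
        return 0.55   # commuting and outdoor working hours
    return 0.75       # ordinary daytime

print(indoor_occupancy(2), indoor_occupancy(8), indoor_occupancy(14))
```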
Simplifications are needed because the world is too large for details everywhere.
If one wanted to estimate in real time what damage is to be expected for critical facilities (e.g. a nuclear power plant, a high dam of a reservoir, bridges, hospitals, schools) one would have to know quite a few additional details. For example, the type of soil the facility is resting on, the blueprints of the construction to calculate its response to different frequency waves, and the frequency spectrum radiated by the earthquake. It can be done, but it is costly. In developing countries, not all of this information is available.
In estimating losses in real time, one must take advantage of the fact that some buildings are built to code, others are not, some are located on hard rock, others on unconsolidated sediments, and the earthquake may radiate more energy in one direction than in another. Summing up expected losses assuming average conditions may end up approximately correct, although local fluctuations in the results exist.
Models for settlements
Photographs taken from space or from air planes are very useful for assembling a database for the built environment of a city. Even on images which have not been enhanced the size and type of buildings, as well as the building use can clearly be identified (Figure 6). Neighborhoods of residential buildings all of similar construction, and industrial zones can be mapped.
The height of buildings can be estimated from the shadows they cast in photographs from space and from the air. Based on height estimates, 3D models of cities can be constructed, as shown in the example of Central Bucharest (Figure 7). Governmental office buildings can be seen at the center, whereas small residential buildings dominate in the east.
Adding photographs of the facades shot from street level, detailed, realistic models of cities can be built (Figure 8). With this added information, it is possible to better classify the construction type of each building and to deepen the detail of the model of the built environment necessary for accurate estimates of losses due to earthquakes.
However, the number of settlements in the world for which population data are available exceeds one million. For each, coordinates, a name, and an estimated population are available, but it is impossible to analyze all of them in the detail shown in Figures 6, 7, and 8. There is no choice but to place the entire population at one coordinate point, regardless of the settlement’s size, and to assign each settlement a standard distribution of buildings into classes of different earthquake resistance. The only refinement one can afford is to have different standard models for different countries and for at least three settlement sizes in each country.
In an ideal case, one would like to have detailed information on every building and its occupants. However, with thousands of large cities at risk and hundreds of millions of inhabitants in them, this is too costly. A cost-effective way to model a large city is to treat each administrative district as a separate settlement.
Expected mortality by city district
In many large cities, the census contains information on population and building stock by district. A model of a city in which each district has its own distribution of buildings into classes and its population, is far superior to the basic, primitive model of one coordinate point. If one has the resources to divide a large city into neighborhoods containing similar building stock, then a high quality model can be constructed at a still moderate cost. An example of the mortality rate estimates in case of a future M8 earthquake off Lima, Peru, shows that there are substantial differences between districts (Figure 9). The differences are due to the distance from the assumed source, the type of soil, and the quality of the building stock. In addition to the mortality calculation for the entire population, information on the locations and expected damage state of schools, hospitals, fire stations, police posts, and critical facilities would be of great value for rescuers. However, to develop this type of information requires a more substantial effort in countries where the location and construction quality of these facilities are not known.
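Putting the pieces together, a district- or settlement-level estimate chains the attenuation relation, the fragility curves, the occupancy factor, and the casualty rates. The following self-contained sketch repeats simplified versions of the earlier fragments; every coefficient in it is an assumed placeholder, not the actual model of QLARM or PAGER.

```python
import math

def intensity(magnitude, distance_km):
    r = max(distance_km, 1.0)
    return min(12.0, 1.5 + 1.5 * magnitude - 2.7 * math.log10(r) - 0.002 * r)

def p_collapse(i, median, beta=0.12):
    return 0.5 * (1.0 + math.erf(math.log(i / median) / (beta * math.sqrt(2.0))))

CLASSES = {   # class: (population share, collapse median intensity, (dead, injured) rates)
    "weak masonry":        (0.55, 9.0,  (0.20, 0.30)),
    "reinforced concrete": (0.45, 10.5, (0.10, 0.40)),
}

def settlement_losses(magnitude, distance_km, population, occupancy=0.75):
    i = intensity(magnitude, distance_km)
    dead = injured = 0.0
    for share, median, (dead_rate, injured_rate) in CLASSES.values():
        exposed = population * share * occupancy * p_collapse(i, median)
        dead += exposed * dead_rate
        injured += exposed * injured_rate
    return i, dead, injured

for name, dist_km, pop in [("Village X", 15.0, 2_000), ("Town Y", 40.0, 60_000)]:
    i, dead, injured = settlement_losses(7.8, dist_km, pop)
    print(f"{name}: I = {i:.1f}, ~{dead:.0f} dead, ~{injured:.0f} injured")
```

Summing such per-settlement (or per-district) results over the affected region yields the country-wide alert numbers discussed in this article.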
Calculating the likely functionality of hospitals after earthquakes requires specialized expertise. In some cities, elaborate efforts by commercial enterprises have been carried out or are under way to catalog information on a neighborhood level, more detailed than shown in Figure 9. In industrial countries details of each house with street address are often known.
State of the art
Uncertainties in real-time estimates
Uncertainties in real-time estimates of human losses are a factor of two, at best. The sources of error introduced by uncertain input may be grouped into three classes according to their seriousness: serious, moderate, and negligible.
The size of the most serious errors is an order of magnitude (meaning a factor of 10). They can be generated by hypocenter errors, incorrect data on building stock, and magnitude errors for M>8 earthquakes. Wrong assumptions on the attenuation of seismic waves may introduce errors of a factor of 3.
Moderate errors, typically about 30%, can be introduced by variations of magnitude for M<8, soil conditions, and directivity of energy radiated. Other inaccuracies in data sets or input contribute errors that are negligible compared to the aforementioned uncertainties.
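Because the dominant error sources act multiplicatively, they are most naturally combined in log space; when one of the order-of-magnitude sources is present, it dominates the total. A small sketch of that bookkeeping, treating each factor as a one-sigma multiplicative width (an assumption made here for illustration):

```python
import math

error_factors = {        # multiplicative uncertainty factors mirroring the classes above
    "hypocenter / building stock / M>8": 10.0,
    "attenuation":                        3.0,
    "magnitude (M<8), soil, directivity": 1.3,
}

log_variance = sum(math.log10(f) ** 2 for f in error_factors.values())
combined = 10 ** math.sqrt(log_variance)
print(round(combined, 1))   # ~13: the order-of-magnitude term dominates the combination
```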
Existing earthquake loss alert services
The QLARM team has been distributing estimates of human losses (numbers of fatalities and injured) by email, in addition to calculations of mean damage for each settlement in their database, following earthquakes worldwide since October 2003. Up to May 2010, these estimates were based on a program and data set called QUAKELOSS; since that time the alerts have been based on the second-generation tool and data set called QLARM, and include a map showing the mean damage expected for affected settlements. The first 10 years of near-real-time earthquake alerts by this team are summarized in Wyss (2014). Recent alerts can be found on the web page of the International Centre for Earth Simulation Foundation (ICES), Geneva: http://icesfoundation.org/Pages/QlarmEventList.aspx.
The National Earthquake Information Center of the USGS has been issuing PAGER alerts by email since April, 2009. They contain a color code reflecting the seriousness of the event, the number of people estimated to have been exposed to the various likely intensity levels, tectonic information about the epicentral area, and consequences that had resulted from previous nearby earthquakes.
The Global Disaster Alert and Coordination System (GDACS) has been issuing color-coded earthquake alerts since September 2005. These reports contain comments on the socio-economic conditions of the epicentral area. As a measure of the level of seriousness, they use only the number of people within set radii of the epicenter. This measure can be misleading because the parameters that control the extent of a disaster (magnitude, depth, transmission properties, building stock characteristics, and time of day) are ignored.
Estimating losses due to tsunami
The methods explained here concern only losses due to strong ground motion. Damage due to tsunamis is not included. The community researching tsunamis is currently struggling with the problem of deciding rapidly after an earthquake whether or not a tsunami has been generated, how high it might be in the open ocean, and finally what local run-ups should be expected. Methods to calculate what happens to the built environment when a wave strikes are not yet developed.
Improvements in accuracy
Human losses can be estimated with sufficient accuracy to help disaster responders mobilize appropriately. Inconsequential events can be identified in 99% of cases, which means that rescue teams are spared needless mobilization. Although the uncertainties in estimating human losses in real time are large, they still allow disastrous cases that need attention to be identified immediately. Some of the uncertainties in the input parameters cannot be improved and will remain as error sources. However, the uncertainty in other parameters, especially the databases, can be reduced by research. Some of the important parameters have hardly been investigated. Because many people are working on this problem, real-time estimates of human losses after earthquakes will become more accurate and more useful.
- Wyss, M. (2004), Earthquake loss estimates in real-time begin to assist rescue teams, worldwide, EOS, 85(52), 567.
- Wyss, M. (2014), Ten years of real-time earthquake loss alerts, in Earthquake Hazard, Risk, and Disasters, edited by M. Wyss, pp. 143-165, Elsevier, Waltham, Massachusetts.
- Wyss, M., Rosset, P. & Trendafiloski, G., (2009a). "Loss Estimates in Near-Real-Time After the Wenchuan Earthquake of May 12, 2008". In Ning, L.; Wang, S.; Tang, G. International Disaster and Risk Conference. Chengdu, China: Qunyan Press. pp. 381–391.
- "Magnitude 7.9 - EASTERN SICHUAN, CHINA".
- List of alerts can be found on www.wapmerr.org.
- "Magnitude 6.3 - CENTRAL ITALY".
- Allen, R. M., and H. Kanamori (2003), The potential for earthquake early warning in Southern California, Science, 300, 786-789
- "National Earthquake Information Center (NEIC)".
- http://earthquake.usgs.gov/earthquakes/recenteqsww/ Archived March 12, 2011, at WebCite
- "GEOFON Program".
- Wyss, M.; Zibzibadze, M. (2010-02-01). "Delay times of worldwide global earthquake alerts". Natural Hazards. doi:10.1007/s11069-009-9344-9. Archived from the original on February 1, 2010.
- Wyss, M., Elashvili, M., Jorjiashvili, N. & Javakhishvili, Z. (2011). Uncertainties in teleseismic epicenter estimates: implications for real-time loss estimate, Bulletin of the Seismological Society of America, in press.
- Richter, C. F. (1958). Elementary Seismology. San Francisco: W. H. Freeman and Company.
- Bullen, K. E. (1963). An Introduction to the Theory of Seismology. Cambridge: University Press.
- Kind, R.; Seidl, D. (1982). "Analysis of Broadband Seismograms from the Chile-Peru Area" (PDF). Bulletin of the Seismological Society of America. 72: 2131–2145.
- Murphy, J.R.; Barker, B.W. (2006). "Improved Focal-Depth Determination through Automated Identification of the Seismic Depth Phases pP and sP". Bulletin of the Seismological Society of America. 96: 1213–1229. Bibcode:2006BuSSA..96.1213M. doi:10.1785/0120050259.
- Devi, E.U., Rao, N.P. & Kumar, M.R. (2009). "Modelling of sPn phases for reliable estimation of focal depths in northeastern India". Current Science. 96: 1251–1255. ISSN 0011-3891.
- Chu, R., Zhu, L. & Helmberger, D.V. (2009). "Determination of earthquake focal depths and source time functions in central Asia using teleseismic P waveforms" (PDF). Geophysical Research Letters. 36 (L17317). Bibcode:2009GeoRL..3617317C. doi:10.1029/2009GL039494.
- Wyss, M.; Rosset, P. (2011). Approximate estimates of uncertainties in calculations of human losses in earthquakes due to input errors (Internal Report). Geneva: WAPMERR. pp. 1–15.
- Shebalin, N. V. (1968), Methods of engineering seismic data application for seismic zoning, in Seismic Zoning of the USSR, edited by S. V. Medvedev, pp. 95-111, Science, Moscow.
- Ambraseys, N.N. (1985). Intensity attenuation and magnitude intensity relationships for western European earthquakes, Earthquake Eng. Struct. Dyn., 13, 733-778.
- Seismic Waves and Earth's Interior, An Introduction to Earthquakes, Saint Louis University.
- Project Activity and Findings, Pacific Earthquake Engineering Research Center (PEER), in partnership with the U.S. Geological Survey and the Southern California Earthquake Center.
- Gruenthal, G., (1998). European Macroseismic Scale 1998. in Cahiers du Centre Européen de Géodynamique et de Séismologie, Conseil de l'Europe, Luxembourg.
- Porter, K. A., K. S. Jaiswal, D. J. Wald, M. Greene, and C. Comartin (2008). WHE-PAGER Project: a new initiative in estimating global building inventory and its seismic vulnerability, 14th World Conf. Earthq. Eng., Beijing, China, Paper S23-016
- Spence, R., So, E. & Scawthorn, C., Human Casualties in Natural Disasters: Progress in Modeling and Mitigation. in Advances in Natural and Technological Hazards Research, Springer, Cambridge, UK, 2011.
- Spence, R.J.S. & So, E.K.M., Human casualties in earthquakes: modelling and mitigation. in Ninth Pacific Conference on Earthquake Engineering, Auckland, New Zealand, in press, 2011.
- "World Gazetteer". archive.is. Archived from the original on 4 December 2012.
- "The World Factbook".
- Scawthorn, C. (2011). "Disaster casualties - accounting for economic impacts and diurnal variation". In Spence, R.; So, E.; Scawthorn, C. Human Casualties in Natural Disasters: Progress in Modeling and Mitigation. Cambridge.
- Trendafiloski, G., Wyss, M., Rosset, P. & Marmureanu, G. (2009). "Constructing city models to estimate losses due to earthquakes worldwide: Application to Bucharest, Romania". Earthquake Spectra. 25 (3): 665–685. doi:10.1193/1.3159447.
- Wyss, M., Trendafiloski, G., Rosset, P. & Wyss, B. (2009b). Preliminary loss estimates for possible future earthquakes near Lima, Peru, with addendum (Internal report). Geneva: WAPMERR. pp. 1–65.
- Wyss, M., G. Trendafiloski, M. Elashvili, N. Jorjiashvili, and Z. Javakhishvili The mapping of teleseismic epicenter errors into errors in estimating casualties in real time due to earthquakes worldwide, abstract, presented at European Geosciences Union General Assembly, Vienna, EUG2011-9938, April 4, 2011.
- "PAGER - Prompt Assessment of Global Earthquakes for Response".
- "GEM Foundation". | <urn:uuid:acddcc7e-700c-42ce-95c8-0c6cf0dd7f10> | CC-MAIN-2017-30 | https://en.wikipedia.org/wiki/Earthquake_casualty_estimation | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424756.92/warc/CC-MAIN-20170724062304-20170724082304-00336.warc.gz | en | 0.919983 | 6,511 | 2.859375 | 3 | {
"raw_score": 3.037257432937622,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
I've recently had a discussion about climate change and how we should adjust our politics. My discussion partner agreed that we have to change something, but only if every other major player does so as well. Otherwise, Germany would hurt itself economically without achieving anything.
In this post, I will briefly talk about what climate change is, how it affects humanity, what nations can do, and what individual people can do.
What is Climate Change?
Climate is the long-term average of weather, typically taken over a period of 30 years. Climate change is a (drastic) change in this average.
It's important to stress that it is about long-term averages. So single days, even weeks, in small areas are not important. Locally or short-term, things do change quickly. But globally and long-term, this is rather an exception.
Most importantly, there is global warming.
Effects of Climate Change
See IPCC-2018-SPM for many details and 36C3 - Science for future? for a nice talk
Too hot to Live
Some regions on Earth will have days, maybe even weeks or months where humans cannot survive outside.
Rising Sea Level
The water will rise. Island nations might disappear13:
- less than 1.5 °C: +0.5m
- 1.5 °C to 2.0 °C: +0.5m, but not certain. 250 million people have to move.
- 3.0 °C to 4.0 °C: +0.5m and more. 1000 million people have to move (including New York, Mumbai, Shanghai, Hamburg)
- more than 4.0 °C: +1m; the melting Arctic and Greenland ice will make the sea level rise by 50m (!)
The issue with the rising sea level is also that it is salt water. So it might make fertile land become unusable for agriculture.
With floodmap you can get a feeling for how problematic the changes are:
Corals could die because of coral bleaching.
When corals die, then a lot of the ecosystem dies. This affects fishing.
A wildfire is an uncontrolled fire in an area of combustible vegetation occurring in rural areas.
They happen pretty often in Australia, California, Michigan.
Notable wildfires of 2019:
- 2018–19 Australian bushfire season burned 4 000 000 ha
- 2019 Siberia wildfires burned 3 000 000 ha
- 2019 Eastern Seaboard (Australia) fires burned >2 000 000 ha, killed 8
- 2019 Alberta wildfires burned 883 414 ha
- 2019 California wildfires burned 102 472 ha, killed 5
- 2019 Amazon rainforest wildfires burned 40 000 ha, killed 2
- 2019 United Kingdom wildfires burned 16 000 ha.
- 2019 Bandipur forest fires (India) burned 4 420 hectares
- 2019 Nelson fires (New Zealand) burned 2 400 ha
See The effects of climate change on water shortages
If it becomes too hot, if the salty water of the sea rises, and if catastrophes like hurricanes increase, then the world will produce less food. This means people will starve.
Just to show you that famines are still happening:
- Famine in Yemen (2016–present): Over 17 million of Yemen's population are at risk
- 2017 Somali drought: affected more than 6 million people
- 2017 South Sudan famine: affected 5 million people
Refugees and War
If people are starving, they will try to fix that problem. The easiest ways to do so are either to go somewhere else or to take food / good land / resources from somebody else. Think of the region around Israel and access to drinking water.
The above effects are hopefully enough to make anybody aware that this is a topic we should work on.
To estimate how much money it is worth putting into this issue, compare the cost of reaching the climate targets with the cost of not doing so:
|Target||Cost to reach||Expected Economic Damages|
|1.5 °C||?||300 billion USD|
|2.0 °C||?||20 trillion USD|
|3.0 °C||?||15% - 25 % reduction in per capita outcome|
|4.0°C||?||30 % reduction in per capita outcome|
What politics can do
In the following, I will quickly summarize the idea of emissions trading and the CO2 budget. This is the only reliable way to reach the climate goals.
Other greenhouse gases are expressed as a CO2 equivalent. According to mdr Wissen1, Germany's CO2-equivalent emissions in 2017 break down as follows:
- 88% CO2
- 6% Methane
- 4% N2O
This means Germany can and should focus on CO2 and Methane.
There are many small-scale discussions in Germany (Speed limit to 120km/h, CCS, CDR, higher energy standards for lightbulbs and houses).
The problem is that this is far from enough. We need all of it and more.
We can see CO2 as a finite resource. We have a certain CO2 budget which we may emit. And we already have a good instrument for dealing with a limited budget: money.
If we want to have a 50/50 chance to stay below the 1.5 °C target, then the world has a budget of 480 Gt CO2. As Germany has 1.07% of the world's population, one could argue it has a budget of 5.1 Gt CO2. Not per year. In total. Ever.
It's hard to make plans for the end of time, so let's keep it at 50 years. This means we have a budget of 100 Mt per year. In 2017, we were at 796 Mt.
Let's assume we had a budget of 100 Mt for 2019. Now Germany could sell certificates. Companies could buy them, and they would need to have a certificate for each ton of CO2 equivalent they emit. If they emit more, they have to buy from another company or plant 2 trees (and keep them alive for 40 years) per ton they emitted. As a hornbeam of 22 years costs 890 EUR, this means each ton of CO2 has to be punished with 3236 EUR.
The 696 Mt we emitted too much in 2017 would therefore cost 2 252 256 million euros. Here you can see that planting trees is not a golden-bullet solution. We need to reduce emissions.
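For reference, the arithmetic above can be reproduced directly from the stated inputs (480 Gt world budget, a 1.07% population share, a 50-year horizon, 796 Mt emitted in 2017, and the 3236 EUR per ton derived from the tree example). A small Python check:

```python
# Reproduce the budget and certificate arithmetic from the stated inputs.

world_budget_gt = 480        # Gt CO2 for a 50/50 chance of staying below 1.5 °C
german_pop_share = 0.0107    # Germany's share of the world population
years = 50                   # horizon chosen in the text

german_budget_mt = world_budget_gt * german_pop_share * 1000  # Gt -> Mt
annual_budget_mt = german_budget_mt / years

emissions_2017_mt = 796
overshoot_mt = emissions_2017_mt - 100            # text rounds the budget to 100 Mt
eur_per_ton = 3236                                # from the tree-planting example
overshoot_cost_meur = overshoot_mt * eur_per_ton  # Mt * EUR/t = million EUR

print(f"German budget: {german_budget_mt:,.0f} Mt total, "
      f"{annual_budget_mt:,.0f} Mt per year over {years} years")
print(f"2017 overshoot: {overshoot_mt} Mt -> "
      f"{overshoot_cost_meur:,} million EUR at {eur_per_ton} EUR/t")
```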
Making CO2 certificates expensive and applying them everywhere will increase the cost of many goods. That will be hardest for low-income families. However, it is also a chance for social justice: the money that Germany receives via the CO2 certificate auction can be distributed to every German citizen equally. This way, a person who uses less than their share of the CO2 budget actually ends up better off. People who use more, e.g. by flying on vacation, pay for the harm they do to all of us.
This point is made well in CO2-Steuer - sinnvolle Maßnahme oder unfaire Belastung? by Joul.
What we can do
There are many small things:
- Reduce meat consumption
- Reduce milk product consumption
- Eat food which was grown locally
- Drive less by car, more by public transportation
- Drive less by public transportation, more by bike
- Look for a work place close to your home / an apartment close to your work / remote work options
- A freight bicycle could help you do more by bike
- Don't fly far away. There are nice places close to you. And if you don't have to spend 700 EUR on flight tickets, you can spend more on whatever you enjoy.
- Use less concrete for house building
- Insulate your house well to reduce heating costs
- Use the sun - add solar panels / solar thermal energy
Here you can get a feeling for which action has which effect:
|Name||CO2 Kilos Equivalent||Source|
|1kg Lamb||39.2 kg||4|
|1kg Beef||27.0 kg||4|
|1kg Cheese||13.5 kg||4|
|1kg Pork||12.1 kg||4|
|1kg Turkey||10.9 kg||4|
|1kg Chicken||6.9 kg||4|
|1kg Tuna||6.1 kg||4|
|1kg Eggs||4.8 kg||4|
|1kg Potatoes||2.9 kg||4|
|1kg Rice||2.7 kg||4|
|1kg Nuts||2.3 kg||4|
|1kg Beans/tofu||2.0 kg||4|
|1kg Vegetables||2.0 kg||4|
|1kg Milk||1.9 kg||4|
|1kg Fruit||1.1 kg||4|
|1kg Lentils||0.9 kg||4|
|1L Gasoline||2.32 kg||5|
|1L Diesel||1.65 kg||5|
|1L LPG||1.79 kg||5|
|1L CNG||1.63 kg||5|
|Driving 100 km with VW Golf (gasoline)||13.22 kg||5.7L/100km * 2.32 kg / L * 100 km|
|Driving 100 km with VW Golf (Diesel)||8.91 kg||5.4L/100km * 1.65 kg / L * 100 km|
|Driving 100 km with VW Golf (CNG)||6.85 kg||4.2L/100km * 1.63 kg / L * 100 km|
|Driving 100 km with Opel Astra (gasoline)||15.08 kg||6.5L/100km * 2.32 kg / L * 100 km|
|Driving 100 km with Opel Astra (Diesel)||9.08 kg||5.5L/100km * 1.65 kg / L * 100 km|
|Flying 100km (per person)||38 kg||9|
|1 kWh electricity||0.474 kg||statista|
|1 year of AmazonBasics E27||58.13 kg||14 W * 365*24h * (0.474/1000) kg / Wh|
|10h Laptop usage||0.26 kg||55 W * 10h * (0.474/1000) kg / Wh|
|1 year of old refrigerator (90L)||185.80 kg||392 kWh * 0.474 kg / kWh, Röhling|
|1 year of new refrigerator (90L)||74.42 kg||157 kWh * 0.474 kg / kWh, Röhling|
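The derived rows of this table (driving, lightbulb, laptop, refrigerator) follow from the base factors listed above. A short Python check using only numbers that already appear in the table (small rounding differences against the table are possible):

```python
# Re-derive the computed rows of the table from its base factors.

KG_CO2_PER_L_GASOLINE = 2.32
KG_CO2_PER_L_DIESEL = 1.65
KG_CO2_PER_KWH = 0.474        # German electricity mix, as listed above

def driving_100km(litres_per_100km, kg_per_litre):
    """kg CO2 for driving 100 km at the given consumption."""
    return litres_per_100km * kg_per_litre

def electricity(watts, hours):
    """kg CO2 for running a device of `watts` for `hours`."""
    return watts * hours / 1000 * KG_CO2_PER_KWH   # Wh -> kWh -> kg CO2

print(f"VW Golf, gasoline, 100 km: {driving_100km(5.7, KG_CO2_PER_L_GASOLINE):.2f} kg")
print(f"VW Golf, diesel, 100 km:   {driving_100km(5.4, KG_CO2_PER_L_DIESEL):.2f} kg")
print(f"14 W bulb, one year:       {electricity(14, 365 * 24):.2f} kg")
print(f"55 W laptop, 10 h:         {electricity(55, 10):.2f} kg")
print(f"Old fridge, 392 kWh/year:  {392 * KG_CO2_PER_KWH:.2f} kg")
print(f"New fridge, 157 kWh/year:  {157 * KG_CO2_PER_KWH:.2f} kg")
```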
What you should take from this table:
- Updating your old refrigerator helps A LOT
- Flying is likely also an easy point where you can reduce your carbon footprint a lot
- Changing your diet (e.g. going from beef to chicken or even to no meat) helps as well
- Single lightbulbs don't matter that much. Remember, though, that this can quickly add up if you have many and let them run for a long time. Astonishingly, a laptop isn't so much worse either.
... China and India?
For sure, we cannot reach the climate targets if the big players (China, USA, India, Russia, Japan, Germany, Iran, Korea, Saudi Arabia, Indonesia - see 12) are not taking part.
However, if everybody waits for others to start, nothing will ever happen. It is also fair to point out that the Western states have already used a lot of the carbon budget in past years, at a time when China and India had very little emissions.
It is way easier to build pressure to change if we go ahead as a leading example.
Even if we assume that the world will not follow and we will not meet the climate targets anyway: Adding a big carbon tax will incentivize changes which help us in other places:
- Improving energy efficiency: Will help to reduce your electricity bill
- Traveling less by plane: Could push our local economy
- Switching to renewable energy: More renewable energy has the chance to build a more distributed, more stable energy grid. It could make us less dependent on foreign nations. This will be especially important when it comes to a climate crisis.
- Switching to electric / CNG / LPG cars: Less particulates will help to make our cities cleaner. This will make us, the people who live in cities healthier.
- Improve the diet: Do I really have to write that this will very likely make you healthier if you eat less meat?
- Localize the diet: Again, this makes us more robust against changes. We have it in our control to have enough food.
Kristin Kielon: Die Top 5 Der CO2-Verursacher Deutschlands nach Sektoren in mdr Wissen, 14.10.2019. ↩
Klima-Orakel: Wie viele Bäume sind nötig, um eine Tonne CO2 zu binden?, 18.06.2009. ↩
Tanya Lewis: The top 10 foods with the biggest environmental footprint, 19.09.2019. ↩
Kraftstoffverbrauch: So viel CO2 stößt Ihr Auto aus in DHZ, 08.08.2019. ↩
Spritmonitor.de: Benzinverbrauch: Volkswagen - Golf (Benzin) ↩
Horst Schwarz: Info: CO2-Ausstoss von Flugzeugen im Vergleich zum Auto, 23.09.2017. ↩
Entwicklung des CO2-Emissionsfaktors für den Strommix in Deutschland in den Jahren 1990 bis 2018 ↩
Marc Röhling: Raus mit dem alten Kühlschrank?, 2017. ↩
Die zehn größten CO2-emittierenden Länder nach Anteil an den weltweiten CO2-Emissionen im Jahr 2018 on Statista, 2019. ↩
Was passiert bei 1,5 Grad mehr? Was bei 2, 3 und 4 Grad?, 28.11.2015. ↩
Marshall Burke, W. Matthew Davis & Noah S. Diffenbaugh: Large potential reduction in economic damages under UN mitigation targets in Nature, 2018. ↩ | <urn:uuid:57a8cebd-b3b1-4f33-a0e2-9e0eb85c4831> | CC-MAIN-2023-23 | https://martin-thoma.com/climate-change/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650264.9/warc/CC-MAIN-20230604193207-20230604223207-00136.warc.gz | en | 0.877942 | 3,229 | 3.203125 | 3 | {
"raw_score": 2.5686795711517334,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Science & Tech. |
Partial re-post from: https://disenchantedscholar.wordpress.com/2017/03/30/racial-realities-mixed-race-fertility-and-neanderthals/
I wanted to expand a little because it’s ridiculous that I’m the top search result and I want to encourage public, detailed research on this topic.
Speciation is an ongoing process, it’s part of evolution, also an ongoing force. As members of a sub-species, better known as race, continue to diverge over time, the characteristic event will be infertility, fertility issues, birth defects and miscarriage. Once it is born, a failure to thrive and reproduce itself would also count as an adverse selection pressure.
My simple question: do we see this?
Oh, boy. Grab a drink, tall one.
The mixed-race dating pool is limited, to the other mixed-race, for example.
This lowers the potential fitness of the organism, compared to its parents’ baseline.
I’ll take a biomedical approach, from the limited information available.
“From EurekaAlert, Asian-white couples face distinct pregnancy risks…:”
Asians have a lower median birth weight, a racial difference as real as shorter African gestation periods compared to Whites.
“Although past studies have looked at ethnic differences in perinatal outcomes, the majority of research has focused on white- African-American couples. Few studies have focused specifically on Asian-white couples, said El-Sayed, who is also associate chief of maternal-fetal medicine.
More specifically, the researchers found that white mother/Asian father couples had the lowest rate (23 percent) of caesarean delivery, while Asian mother/white father couples had the highest rate (33.2 percent). Because birth weights between these two groups were similar, the researchers say the findings suggest that the average Asian woman’s pelvis may be smaller than the average white woman’s and less able to accommodate babies of a certain size.”
Nature is trying to tell you something there.
There is a clear natural selection pressure exerting itself.
Also, C-section birth puts the baby at a distinct disadvantage, those children have a weakened immune system, poorer health and fare worse in pair bonding.
Learning disability is on the tag list. Look for yourself.
It doesn’t decrease infant mortality and can actually kill the mother over time.
It’s serious surgery.
“It becomes routine but it is still a major surgery. That carries a long-term effect on maternal health.”
“Compared with women having a vaginal birth, those having a C-section for the first time have… a 5.7 times greater risk of an unplanned hysterectomy”
Nature is telling you something there.
“El-Sayed and his colleagues also found that the incidence of gestational diabetes was lowest among white couples at 1.61 percent and highest among Asian couples at 5.73 percent – and just under 4 percent for Asian-white couples. These findings weren’t altogether surprising: past studies have shown an increased risk of diabetes among Asian couples, which researchers attribute to an underlying genetic predisposition. But the interesting finding, El-Sayed said, was that the risk for interracial couples was about the same regardless of which parent was Asian.”
Dominant genes? No!
“Because of the results on Caesarean section rates they adduce that there is a pelvic size difference between Asian women and white women. Objective male observer acquaintances of mine have generally tended to back up this phenotypic difference between the populations.”
They’re shaped like pre-pubescent boys. Why else get surgery?
You should study it formally though. Asians have the lowest sexual dimorphism and it’s important to know the numbers.
“Although births of multiracial and multiethnic infants are becoming more common in the United States, little is known about birth outcomes and risks for adverse events. We evaluated risk of fetal death for mixed race couples compared with same race couples and examined the role of prematurity and low birth weight as potential mediating risk factors.”
Miscegenation doesn’t work, even with modern medicine.
This applies to black-white pairings too.
It is a disgrace adults are marrying without knowledge of the biology involved.
“Mixed race black and white couples face higher odds of prematurity and low birth weight, which appear to contribute to the substantially higher demonstrated risk for stillbirth. There are likely additional unmeasured factors that influence birth outcomes for mixed race couples.”
I cannot find a stillbirth study for Asian-White pairings, I’m sorry. Is it so common they need not study it?
I am looking, nobody is studying it.
I’m sorry, I am looking. It would be nicer if fewer babies were dying.
We have anecdotes?
“Most people don’t discuss miscarriages because you worry your problems will distance you or reflect upon you — as if you’re defective or did something to cause this.” Mate choice is something you did. The baby didn’t choose to be conceived by you two. Part of your biology must be defective because miscarriage is an outcome of defective conception and/or pregnancy (there are many possible reasons, some environmental, a few random plus ‘stress’). It sounds cruel but yes, medically, something is wrong.
OT: Jews have a non-White miscarriage rate.
Jews invented/funded IVF because they needed it.
Israel is a eugenic ethnostate.
“The issue of the rate of recurrent miscarriages in high-risk Jewish women is unresolved.”
I am biting my tongue.
When trying really hard, the only evidence for hybrid vigour in White Americans vs. mulattos, which they sought to prove (scientism) is “relatively small.” …Is it present or not?
“this study provides evidence [DS: the evidence isn’t proof?] that increased stature and cognitive function have been positively selected in human evolution, whereas many important risk factors for late-onset complex diseases may not have been.”
That’s bullshit, everyone is getting taller and getting better grades.
May not have been? In Nature?
Listen to the twisting in this: http://www.medicaldaily.com/g00/interracial-couples-may-make-taller-smarter-children-due-greater-genetic-diversity-341348
“Meanwhile, human evolution is more focused on the ability to create healthy offspring and have them survive infancy to continue raising them.”
…Yes, it is.
“Whether you come from a genetically diverse background or not, in the end even the most common medical ailments that affect society will affect everyone, with genetic diversity having little to no impact.”
No, genes. The most common fatal medical ailments aren’t a cold, they’re genetic-based, it’s established fact. And if it had no impact, why push it?
“It combines the parents’ genetic material, resulting in offspring that possess a unique set of genetic blueprints that increase their chances of surviving and thriving compared to a population with limited genetic variability.”
No such thing. Limited genetic variability? No such thing. Where is this thing?
They’re just talking absolute crap to cover how their study was a non-result. Every genome is unique, between twins even. Thriving and surviving varies by individual genome, that should be studied by the natal people. You know this. You hide the scant data that is there with delusions. This is propaganda. It continues:
“This encapsulates Charles Darwin’s theory of natural selection,”
No, he wrote a whole book. Look at the subtitle to The Origin of the Species.
Natural selection is about death and mortality, which you have not studied. Disease is not death.
“where individuals with characteristics that increase their probability of survival”
how? like being able to give birth?
“will have more opportunities to reproduce,”
in a limited dating pool
“according to the University of California, Berkeley’s Understanding Evolution.”
If California understood evolution, it would be Alaska.
“As a result, their offspring will benefit from the variants,”
no, not if they’re the more common disadvantageous mutations or if the combination is novel and fatal
“which will spread throughout the population.”
No, you’re assuming they breed. Infertility exists, and it exists on a spectrum.
“This is an increased risk equivalent to smoking, advanced maternal age or obesity.”
“While other research has found the mother’s ethnicity places a role in the risk of a stillbirth, this has largely been put down to factors related to migration and social disadvantage. What our research shows is women born in South Asia and giving birth in Australia are at increased risk even when other factors are taken into account.”
“There is growing evidence to suggest a mother’s ethnicity influences how fast her placenta ages as her pregnancy progresses.”
Asian placenta is old, got it.
“For some women, they can go into spontaneous labour sooner. In our study, we found South Asian-born women went into labour a median one week earlier than Australian- or New Zealand-born women.”
Racial differences in gestation duration, again.
“However, for others, an ageing placenta cannot meet the fetus’ increasing metabolic needs at term and beyond. And this increases the risk of stillbirth.”
Infertility, insufficient maternal resources for the fetus. That’s a kind of infertility. Considering how skinny they are and how those female curves are supposed to feed a baby, historically, this is not surprising.
Nature is aborting babies that would starve. Before it kills the mother too.
“And the length of telomeres in placentas from pregnancies ending in stillbirth are two times shorter than those from live births. In other words, the placental cells had aged faster.”
Superior Asian genetics people might wanna cover their innocent eyes.
“Some researchers have also studied ethnic differences in placental telomere length.
In an American study, placental telomeres from pregnancies in black women were significantly shorter than from pregnancies in white women (the ethnic backgrounds of the women were not further defined in the study).”
Superior European placentas. As you’d expect for the one race hit hard by an Ice Age. Perhaps this is an unknown r/K variable.
“Whether telomeres are shorter in placentas from pregnancies in South Asian-born women is unknown.”
Oh, I think I can guess.
“There was a high prevalence of stillbirth in this multi-ethnic urban population. The increased risk of stillbirth observed in non-White women remains after adjusting for other factors.”
Whites are different? Biologically? Shudder-gasp!
Let’s see if BMI matters.
Yes. Of course it does. They only studied high BMI though.
“However, BMI does not take into account the relative proportions of fat and lean tissue and cannot distinguish the location of fat distribution”
“However, these are based on information derived from the general population, based on risk of mortality, without consideration for racial or ethnic specificity and were not determined to specifically identify those at risk for diabetes. Recently, the U.S. Centers for Disease Control and Prevention presented initial findings from an oversampling of Asian Americans in the 2011–2012 National Health and Nutrition Examination Survey. These data, utilizing general population criteria for obesity, showed the prevalence of obesity in Asian Americans was only 10.8% compared with 34.9% in all U.S. adults (13). Paradoxically, many studies from Asia, as well as research conducted in several Asian American populations, have shown that diabetes risk has increased remarkably in populations of Asian origin, although in general these populations have a mean BMI significantly lower than defined at-risk BMI levels (14,15). Moreover, U.S. clinicians who care for Asian patients have noticed that many with diabetes do not meet the published criteria for obesity or even overweight.”
So we’d need to look at WHR, instead of BMI.
“In women, the connection between WHR and health measures appears to be hormonal. It is known that ratios of estrogen, progesterone, and prolactin affect all of these features. The “right” balance promotes both health and low WHR. One version of the “attractiveness theory” posits that our attraction to this body shape developed as an indicator of overall health.”
“Another crucial part of the attractiveness theory of wait-hip-ratio (WHR) is that this body shape has to be indicative of something related to fertility, or else it wouldn’t have any evolutionary value.
The key feature in a potential mate is biological fitness, that is, the potential to give birth to many healthy and successful offspring.
Desirable females, in the evolutionary sense, are those that are likely to be healthy, fertile, and robust.
Robust = pelvis, btw.
Venus was never a narrow-hipped vixen.
The body acceptance people should really focus on the hips.
A low WHR, it is thought, must correlate with fertility (ability to have children) and/or fecundity (tendency to have large numbers of children).”
There is such a thing as too low. Boyish figures have less fat, fewer curves and narrower hips.
They’re confusing women who have obesity and babies for State money with natural attractiveness, fecundity in the state of nature and blurring BMI with WHR. Nobody said unhealthy (low) WHR is wealthy, for fecundity. That’s a strawman. The hormones and other details, medical details, are better profiled in the most nubile WHR range. It is a range. Don’t line graph me, study.
It doesn’t mention race although many women in the world do not have a figure. Unless you count a figure of 1.
Hormones and junk: http://www.independent.co.uk/life-style/health-and-families/health-news/health-what-a-man-cant-resist-the-perfect-waist-hip-ratio-forget-about-breasts-says-jerome-burne-its-1440859.html
“The waist is one of the distinguishing human features, such as speech, making tools and a sense of humour,’ says Professor Singh. ‘No other primate has one. We developed it as a result of another unique feature – standing upright. We needed bigger buttock muscles for walking on two legs.”
If the waist makes the human, a lot of women are fucked.
“The ideal ratio in healthy pre- menopausal women ranges between 0.67 and 0.8. In terms of the tape measure, this is produced by waists between 24in and 28in with 36in hips, and waists between 27in and 31in with 40in hips.”
…How many Asian women have a 36″ hip?
The fat ones I’ve seen were pufferfish.
“come puberty, the sex hormones start directing it differently.”
“Oestrogen, the hormone of female sexual characteristics, concentrates it on the buttocks and hips while the masculinising hormone testosterone encourages fat to form around the waist.’ At the same time testosterone encourages fat to be burnt off the buttocks while oestrogen takes it off the abdomen.
These characteristically feminine fat stores are used in the last months of pregnancy and during breast-feeding. This is another reason why women who are seriously underweight often stop menstruating – they would not have the resources to support a pregnancy or a baby.”
“Women with a low ratio, Professor Singh says, tend to start ovulating younger, and those with a high ratio find it more difficult to become pregnant and tend to have children later. [not by choice]
Although a high waist-hip ratio most commonly goes with being overweight, it can also be found in women of normal weight who have high testosterone levels – a condition that is also associated with being hairy, infertile and having a ‘male’ body shape.”
Manly body, fertility problems. Study it. Avert tragedy.
“In a survey of 106 men aged 18 to 22, the favourite was a female of average weight with the classic hour-glass figure. Not only were such women rated as young, sexy and healthy, they were also seen as ideal for childbearing.”
Again, sexy is different from beautiful.
Porn is a lie.
“The young men regarded the underweight women – defined as women of 5ft 5in weighing less than 90lb – as ‘youthful’ but not particularly attractive, especially for childbearing.”
To prefer the obese over the mannish figured for motherhood is huge.
Youthful is code for making them feel like a pedophile.
“In Professor Singh’s other surveys, men of all ages agreed with these findings – thus bearing out her theory of the waist-hip ratio.”
Women dropped the corset to signal they weren’t just baby-making machines.
It’s hard to test low-WHR women in a world of obesity.
“Women who were extremely underweight or overweight were not included.”
Study them separately?
Porn is making you drawn to infertile women, with boy hips. Conditioning.
“Figures of average weight and a WHR of 0.7 were rated as most attractive and healthy.”
It is important.
I want to see a study that looks at racial WHR against pregnancy issues.
Is that so hard to ask?
“These data indicate that BF% appears to be a strong cue for attractiveness and that the impact of WHR and BMI on attractiveness is dependent, in part, on BF%. The appearance of body fat may provide disruption in the visual cues of both shape and size of the female body, potentially impacting behavior.”
Speciation is determined by biological compatibility in sum. This includes many factors. On none I have seen do Asian-White hybrids succeed over their parental groups’ averages; even IQ gains, if true, would be worse for the individual’s own fertility rate.
The only other thing I could think of is a study on STD rates between couples.
“The association between travel and STDs has been known for centuries”
What’s the Asian version of burn the coal? Pick the chopstick, get ripped?
Prevalance: “fairly common.”
The wages of sin. You can’t blame the white man.
Syphilis present in Asian archaeological samples.
‘referred to as “the intraracial network effect,”’
Oh, that’s why they don’t study it.
“suggest that assortative mixing prevents the spread of STI to other subpopulations.”
“A number of studies in the literature, many of which did not measure biomedical markers of STI, suggest that mixing across subpopulations may contribute to spread of STI in the population, particularly across subpopulations.”
If you increase the microbe’s exposure to different parts of the human genome, it will evolve faster. Simple?
Age groups can be a larger factor, since the older immune system is weak and better for the microbe.
“In a recent study conducted in Seattle we found that most of the disease burden for gonococcal and chlamydial infections in both high prevalence and low prevalence subpopulations was attributable to mixing within the subpopulations”
I think we’ve found the reason white women mix out the least. Same reason we don’t like to eat meat raw – to avoid disease.
‘the proportion of infection attributable to indirect mixing, or so called “bridge populations,”
So it is attributable and naturally must inform sexual behaviour.
“While we found that sexual mixing between particular racial ethnic subpopulations increased the risk of STI significantly, the proportion of the population engaging in sexual mixing, and the numbers of sex partners reported by individuals engaging in sexual mixing across racial-ethnic subpopulations were too low for this increased risk to play a major part in disease burden.”
Hybrid vigour, guys!
The risk isn’t the major part, it’s fine! Water’s fine!
“The literature on racial-ethnic differentials in STI rates and the role of racial ethnic mixing on the spread of STI is emergent; many questions still remain unanswered.”
If miscegenation were unhealthy, we’d know, right? | <urn:uuid:7331deee-f2e8-4f92-b4a6-c96f2bfbab70> | CC-MAIN-2018-13 | https://disenchantedscholar.wordpress.com/tag/biomedical/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647251.74/warc/CC-MAIN-20180320013620-20180320033620-00396.warc.gz | en | 0.938342 | 4,555 | 2.609375 | 3 | {
"raw_score": 2.9144184589385986,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
An analytical write-up is used to present a substantive analysis of a topic. You can write an analysis of almost anything: arts, music, politics, life, contemporary affairs, scientific research, philosophy, history, and many other topics. This kind of write-up is a favorite among teachers.
Teachers use analytical write-ups to assess their students. An analytical essay makes it easy for teachers to know where their students stand in framing opinions. It enhances your ability to think critically and formulate opinions. Analytical write-ups are considered very important in academic settings.
Newspapers, trade journals, academic journals, and magazines include analytical write-ups. Opinion pieces in newspapers are ideal for reading and for practicing how to formulate an essay. A perfect write-up of this kind should be contextualized and explained with basic information for the reader.
There are two basic purposes for writing an analytical paper: one serves the writer and the other serves the reader. The writer gets his writing and thinking skills enhanced, while the reader gets to know about the topic. Analytical write-ups are mostly written on current topics, so you need to be experienced enough to explain them well. You are advised to hire an essay writer who has experience in this domain. You can learn from him, and with time you will be able to do it well yourself.
Some of the basic techniques for writing an analytical write-up are as follows:
How to make an analysis
Writing an essay is one thing, but performing an analysis of a specific topic is another. You have to convey your point of view regarding the topic under discussion, using a specific theory that explains that topic. You need a historical background on the topic and should then try to contextualize the future.
Point of view
If you have to write down an analysis on a certain topic you have to be precise in your point of view. You first need a point of view to start work on. That point of view is evident throughout your thesis statement.
A perfect introduction
You have to keep the choice of words simple, especially in the introductory paragraph. Use some hooks to attract the reader. Also, provide some background information for readers who are new to the subject. You have to conclude the introduction with your main argument, the thesis statement. For professional help, ask someone to write my paper.
The body of your essay should be well organized, with the right things in the right place. Keep your transitions smooth. Everything you write should carry proper meaning and be properly structured. All the paragraphs you write must be aligned with the thesis statement. The scope of the essay determines the number of paragraphs in the body.
The essay should convey proper meaning. It should be written using simple words so that everyone can read it easily. A topic sentence is used to start a paragraph; it helps introduce the content of the paragraph. You should use some hooks in this sentence as well.
Use of evidence
While writing an analysis, if you do not give proper references or evidence for your argument, it is not considered academically sound. If you are not using proper evidence for what you are writing, it means you are simply making claims like any other person. You can also get help from your seniors by asking them to write essay for me.
Space for contrasting opinions
If you want your argument to be strong, you should bring in another point of view. It doesn't matter whether you agree with it or not, but it should be relevant and credible. Using that argument, you can set the pace. You have the choice either to agree or to disagree with that argument.
In the end, you have to summarize what you have written in the essay so far. The conclusion should not be detailed; it should concisely revisit the main points. End your essay on a positive note and always try to stay on point. Such a write-up can only be handled by a professional writer, so you should hire a paper writing service; it would help you learn. With the necessary experience, you can do it yourself.
The above-mentioned characteristics of an analytical write-up are very important. They improve your essay and can result in good grades and attention to your work.
Exclusive access to the "EssayWritingService" Learning Center. You’ll get weekly tips and tricks for improving your own writing and for achieving academic success through your writing.
"EssayWritingService" is the #1 Ranked Online home for great academic writing, essays, research papers, and graduate theses.Why us? | <urn:uuid:1896d655-a4ec-4a19-9477-396296246df1> | CC-MAIN-2021-43 | https://www.essaywritingservice.college/essay-examples/writing-techniques-for-an-analytical-essay | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00186.warc.gz | en | 0.945315 | 945 | 2.875 | 3 | {
"raw_score": 1.9347089529037476,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Education & Jobs |
The World Resources Institute has issued a report that states BRT is better than LRT for the Purple Line. The question is how they came to this conclusion. It's littered with the usual objections to light rail with a few new ones for good measure. My favorite quip is the "we like light rail but not in this instance" which we've seen about a million times before. In the report, they even admit to thinking short term.
Major capital projects implemented in the near-term will shape the long-term future of transport in the region. WRI urges regional planners and other decision makers to consider current needs and concerns in the context of tomorrow’s transportation challenges, especially regarding traffic congestion, fuel costs, and climate change.
So what you're saying is that we should look at everything? Well, you forgot a few things, guys, like changes in development patterns, particulate matter and lifecycle costs in terms of construction. Replacing all the buses every 12 years is always good for the environment. Another annoying FTA-related issue is the no-build alternative. It's not really a no-build but rather a basic bus service. Of course incremental change from a bus line to BRT is going to be more "cost effective". The other bus line doesn't even exist! Then there is this:
As illustrated in Figure 7, only the Medium and High Investment BRT alternatives reduce CO2 emissions, with 8,883 and 17,818 fewer metric tons per year, respectively, compared to the No Build scenario. All of the remaining alternatives increase annual emission levels compared to No Build.
Again, the no build doesn't even exist, so how is the BRT line reducing emissions while LRT isn't? Well, the truth is it is reducing emissions because the alternative isn't the no build but rather nothing at all. Both lines reduce GHGs in the transportation sense. What we don't know is exactly what the reductions in VMT are going to be from land use and whether the land use patterns will create more incentives to walk, creating even fewer car trips and development patterns that themselves save infrastructure and energy costs. Not to mention they say nothing about particulates from a single source of pollution versus multiple sources that spew along a whole corridor.
Energy consumption from roadways decreases with introduction of LRT, but the resulting emissions reduction is not sufficient to counterbalance the effect caused by the high electricity CO2 emission factor. While we anticipate that this emission factor will decrease in the future due to increased use of renewable energy sources and likely GHG reduction legislation, these drivers have not been included in the AA/DEIS. Further consideration is given to the electricity emission factor in the following sections.
In all reality, the Purple Line should be a subway. Bringing it down to light rail is bad enough, but all the way down to bus rapid transit would be a wasted opportunity to change the corridor. But for once, could someone do an analysis that includes land use change, the issues of air pollution, the real lifecycle costs? This analysis shows how much effect the FTA policy has on what our future will look like, and that is upsetting. Let's stop leaving out the whole picture. | <urn:uuid:3164c886-b58b-4981-bbdb-d941f637582e> | CC-MAIN-2016-50 | http://theoverheadwire.blogspot.com/2009/01/leave-something-out.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541896.91/warc/CC-MAIN-20161202170901-00221-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.950309 | 645 | 2.625 | 3 | {
"raw_score": 2.9604434967041016,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Transportation |
By combining two treatment strategies, both aimed at boosting the immune system's killer T cells, Johns Hopkins researchers report they lengthened the lives of mice with skin cancer more than by using either strategy on its own. And, they say, because the combination technique is easily tailored to different types of cancer, their findings -- if confirmed in humans -- have the potential to enhance treatment options for a wide variety of cancer patients.
"To our knowledge, this was the first time a 'biomimetic,' artificial, cell-like particle -- engineered to mimic an immune process that occurs in nature -- was used in combination with more traditional immunotherapy," says Jonathan Schneck, M.D., Ph.D., professor of pathology, who led the study together with Jordan Green, Ph.D., associate professor of biomedical engineering, both of whom are also members of the Kimmel Cancer Center.
A summary of their study results will be published in the February issue of the journal Biomaterials and is available online now.
Scientists know the immune system is a double-edged sword. If it's too weak, people succumb to viruses, bacteria and cancer; if it's too strong, they get allergies and autoimmune diseases, like diabetes and lupus. To prevent the immune system's killer T cells from attacking them, the body's own cells display the protein PD-L1, which "shakes hands" with the protein PD-1 on T cells to signal they are friend, not foe.
Unfortunately, many cancer cells learn this handshake and display PD-L1 to protect themselves. Once scientists and drugmakers figured this out, cancer specialists began giving their patients a recently developed class of immunotherapy drugs including a protein, called anti-PD-1, a so-called checkpoint inhibitor, that blocks PD-1 and prevents the handshake from taking place.
PD-1 blockers have been shown to extend cancer survival rates up to five years but only work for a limited number of patients: between 15 to 30 percent of patients with certain types of cancer, such as skin, kidney and lung cancer. "We need to do better," says Schneck, who is also a member of the Institute for Cell Engineering.
For the past several years, Schneck says, he and Green worked on an immune system therapy involving specialized plastic beads that showed promise treating skin cancer, or melanoma, in mice. They asked themselves if a combination of anti-PD1 and their so-called biomimetic beads could indeed do better.
Made from a biodegradable plastic that has been FDA-approved for other applications and outfitted with the right proteins, the tiny beads interact with killer T cells as so-called antigen-presenting cells (APCs), whose job is to "teach" T cells what threats to attack. One of the APC proteins is like an empty claw, ready to clasp enemy proteins. When an untrained T cell engages with an APC's full claw, that T cell multiplies to swarm the enemy identified by the protein in the claw, Schneck explains.
"By simply bathing artificial APCs in one enemy protein or another, we can prepare them to activate T cells to fight specific cancers or other diseases," says Green, who is also part of the Institute for NanoBioTechnology, which is devoted to the creation of such devices at Johns Hopkins.
To test their idea for a combined therapy, the scientists first "primed" T cells and tumor cells to mimic a natural tumor scenario, but in a laboratory setting. In one tube, the scientists activated mouse T cells with artificial APCs displaying a melanoma protein. In another tube, they mixed mouse melanoma cells with a molecule made by T cells so they would ready their PD-L1 defense. Then the scientists mixed the primed T cells with primed tumor cells in three different ways: with artificial APCs, with anti-PD-1 and with both.
To assess the level of T cell activation, they measured production levels of an immunologic molecule called interferon-gamma. T cells participating in the combined therapy produced a 35 percent increase in interferon-gamma over the artificial APCs alone and a 72 percent increase over anti-PD-1 alone.
The researchers next used artificial APCs loaded with a fluorescent dye to see where the artificial APCs would migrate after being injected into the bloodstream. They injected some mice with just APCs and others with APCs first mixed with T cells.
The following day, they found that most of the artificial APCs had migrated directly to the spleen and liver, which was expected because the liver is a major clearing house for the body, while the spleen is a central part of the immune system. The researchers also found that 60 percent more artificial APCs found their way to the spleen if first mixed with T cells, suggesting that the T cells helped them get to the right spot.
Finally, mice with melanoma were given injections of tumor-specific T cells together with anti-PD-1 alone, artificial APCs alone or anti-PD-1 plus artificial APCs. By tracking blood samples and tumor size, the researchers found that the T cells multiplied at least twice as much in the combination therapy group than with either single treatment. More importantly, they reported, the tumors were about 30 percent smaller in the combination group than in mice that received no treatment. The mice also survived longest in the combination group, with 45 percent still alive at day 20, when all the mice in the other groups were dead.
"This was a great indication that our efforts at immunoengineering, or designing new biotechnology to tune the immune system, can work therapeutically," says Green. "We are now evaluating this dual strategy utilizing artificial APCs that further mimic the shapes of immune cells, such as with football and pancake shapes based on our previous work, and we expect those to do even better."
Other authors of the report include Alyssa Kosmides, Randall Meyer, John Hickey, Kent Aje and Ka Ho Nicholas Cheung of the Johns Hopkins University School of Medicine.
This work was supported in part by grants from the National Institute of Allergy and Infectious Diseases (AI072677, AI44129), the National Cancer Institute (CA108835, R25CA153952, 2T32CA153952-06, F31CA206344), the National Institute of Biomedical Imaging and Bioengineering (R01-EB016721), the Troper Wojcicki Foundation, the Bloomberg~Kimmel Institute for Cancer Immunotherapy at Johns Hopkins, the JHU-Coulter Translational Partnership, the JHU Catalyst and Discovery awards programs, the TEDCO Maryland Innovation Initiative, the Achievement Rewards for College Scientists, the National Science Foundation (DGE-1232825), and sponsored research agreements with Miltenyi Biotec and NexImmune.
Under a licensing agreement between NexImmune and The Johns Hopkins University, Jonathan Schneck is entitled to a share of royalty received by the university on sales of products described in this article. He is also a founder of NexImmune and owns equity in the company. And he serves as a member of NexImmune's Board of Directors and scientific advisory board. Jordan Green is a paid member of the scientific advisory board for NexImmune and owns equity in NexImmune. The terms of these arrangements have been reviewed and approved by The Johns Hopkins University in accordance with its conflict of interest policies. | <urn:uuid:3a921648-f10b-407f-87fb-4df9908a9fa7> | CC-MAIN-2018-09 | https://www.eurekalert.org/pub_releases/2016-12/jhm-dst122016.php | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817999.51/warc/CC-MAIN-20180226025358-20180226045358-00238.warc.gz | en | 0.95609 | 1,542 | 2.890625 | 3 | {
"raw_score": 3.008204698562622,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
Although diamonds are supposed to be forever, it’s the human love affair with gold that has been truly lasting. For thousands of years, since even before the time of King Tut, gold has been prized for its beauty and value. It’s no wonder, then, that so much gold is sitting in jewelry boxes and central bank vaults. The U.S. Geological Survey estimates that 171,300 tons (PDF) of gold have been mined throughout history. And that total is rising by about 3,000 tons a year.
Nothing's wrong with gold. The problem is that gold mining is extremely bad for the environment. Modern gold mining methods generate about 20 tons (PDF) of toxic waste for every gold ring. Gold mining pollutes the air and water with toxic substances such as cyanide and mercury. The leading cause of mercury pollution today, ahead of even coal-fired power plants, is gold mining. And it’s partly because of gold mining that the Amazon rainforest is being destroyed and that mercury levels in fish are so high.
How can the world continue to enjoy gold without wreaking havoc on the planet? One solution is for gold miners to use more eco-friendly mining methods. They could stop using mercury or stop dumping their toxic waste (PDF) in rivers and oceans. But there’s an even better solution: the world simply could recycle more of the gold we already have.
The promise of recycled gold
Recycling gold makes a lot of intuitive sense. Gold can be recycled with no degradation in quality, so gold originally mined centuries ago is just as good as new. It can be recycled and repurposed without the need for any new mining at all.
And huge amounts of gold are on hand. Only a tiny amount of the gold that has been mined – about 3,600 tons – has been lost. The rest, totaling 167,700 tons, is still available. About half of that has been incorporated into jewelry. The rest has been locked in the vaults of central banks, held by private investors or used to make other products such as iPhones and dental fillings.
In 2012, if the world had re-used or recycled less than 3 percent of existing gold supplies, it could have satisfied 100 percent of global demand. And by recycling about 5 percent of gold jewelry, all the world’s gold needs could have been met last year.
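Those percentages can be sanity-checked with a bit of arithmetic. The short Python sketch below uses only the tonnage figures quoted above; the implied annual demand is an inference from the 3 percent claim, not a number reported here.

```python
# Back-of-the-envelope check of the recycling claims, using the figures quoted above.
total_mined = 171_300            # tons of gold ever mined (USGS estimate)
lost = 3_600                     # tons lost over history
available = total_mined - lost   # 167,700 tons still above ground
jewelry = available / 2          # roughly half of that is jewelry

# "Recycling less than 3 percent of existing gold supplies" in 2012:
print(f"3% of above-ground gold: {0.03 * available:,.0f} tons")   # ~5,000 tons
# "Recycling about 5 percent of gold jewelry" last year:
print(f"5% of jewelry stock:     {0.05 * jewelry:,.0f} tons")      # ~4,200 tons
```

Both routes land in the same 4,000–5,000-ton range, which is consistent with the claim that either one would have covered a year of global demand.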
What’s stopping the world from recycling or re-using more gold? Wouldn’t it be smart to use the gold we already have before digging any more gold mines so big they can be seen from space?
The recycled gold percentage
Actually, a lot of gold does come from recycled sources. In 2012, according to World Gold Council statistics, about 36 percent of the gold supply consisted of gold from existing jewelry and other products such as electronics.
That’s a start, but the current percentage may be temporarily inflated by unusual circumstances: a combination of high gold prices and weak economies. (Historically, about 30 percent of gold has been recycled.) Those factors have given people an incentive to exchange more gold jewelry for cash. From an environmental perspective, however, high gold prices are nothing to cheer about. The high price of gold has led to much more gold mining – and much more pollution overall.
Ideally, it would be possible to rely more on existing gold supplies and reduce the total amount of gold mining. What is the best way to begin?
Here’s one tempting solution: empty the vaults. Central banks and investors are currently net buyers, not sellers, of gold. But suppose banks and investors decided to sell off their gold and invest in something more eco-friendly – say, forests. All of this gold easily could meet the world’s jewelry and technology needs for 15 or 20 years.
Assuming gold prices continue to fall, something like this may happen. Jittery investors may start to sell more of their gold, reducing the need for gold mining. On the other hand, gold investment trends are driven by factors that are difficult to predict or control, such as the strength of the world economy. A better long-term approach might be to focus on the jewelry market.
The solution in your jewelry box
Two factors make jewelry a good place to start. First, the single biggest use for gold remains jewelry, not investing. (In the most recent economic quarter, jewelry made up 67 percent of gold demand.) Producing more jewelry from recycled gold therefore would prevent a lot of gold mining. In addition, jewelry is the biggest potential source of recycled gold, because it accounts for about half the gold sitting above ground.
Jewelry also has the advantage of being a product sensitive to consumer tastes. Witness what happened when Prince William gave Kate Middleton his mother’s sapphire and diamond engagement ring. Demand for sapphires shot up.
Suppose newly mined gold became no longer acceptable in luxury jewelry. Suppose the world’s major jewelers committed to using only recycled precious metals. And suppose all jewelers made it a practice to accept gold jewelry for recycling, perhaps giving customers a credit toward the purchase of new jewelry. The fundamentals of the gold market could begin to shift. Demand for newly mined gold would drop. Recycled gold supplies would go up. Gold mining would become less economical, and there would be much less need for it.
Could anything like this really happen? Ethical considerations already are part of the equation when consumers shop for diamonds. More people are avoiding blood diamonds and choosing diamonds with ethical origins. Consumer pressure could – and probably will – start to create a bigger role for recycled gold.
The only question is how long it will take before consumers begin to flex their muscles – and how much environmental damage will have been done before that happens.
Image of rings by Bangkokhappiness via Shutterstock | <urn:uuid:6539c9ef-d70b-4456-8349-2282e6a21ece> | CC-MAIN-2014-23 | http://www.greenbiz.com/blog/2013/08/27/can-recycling-gold-help-metal-regain-its-environmental-shine | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889001.72/warc/CC-MAIN-20140722025809-00100-ip-10-33-131-23.ec2.internal.warc.gz | en | 0.962157 | 1,202 | 3.296875 | 3 | {
"raw_score": 2.9675021171569824,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Industrial |
Do you suffer from asthma and wish you had more information on the subject? Many people want to know more about asthma but do not know where to find that information. What you are about to read about the disease might surprise you.
It is important that you do your best to avoid cleaning chemicals if you have asthma. A lot of the chemicals in cleaners tend to trigger asthma attacks and symptoms. If you are responsible for cleaning your residence, use safer, natural products.
Avoid the things that you know can trigger your asthma. For some people, allergens such as dust and pollen set off attacks. Others have asthma attacks when they participate in physical activities. Try to figure out what sets your asthma off so it can be avoided.
If you are having a mild to moderate asthma attack, try to force the air out of your lungs. Breathe out quickly and hard, forcing the air out. Take three quick breaths, followed by a deeper one, before exhaling with force again. This method forces you to pay close attention to your breathing and creates a steady rhythm. It also helps empty the lungs so more air can enter. You may cough up some sputum, but the main objective is to get you breathing normally again.
Cigarette smoke makes asthma worse. Avoid the fumes of chemical products and do not breathe harmful vapors. These can trigger an asthma attack that you might not be able to stop. If others are smoking nearby, remove yourself as quickly as possible.
Learn how to use an inhaler properly if you do not already know. The inhaler can only help if its medicine reaches the lungs. Spray the stated dose of medicine into your mouth while inhaling. Then hold your breath for at least 10 seconds so the medicated mist can fill your lungs.
People suffering from asthma should avoid scented household products. Products with fragrance, such as perfumes, colognes, and air fresheners, introduce irritants into the air that can trigger asthma. Fresh paint and new carpeting also give off fumes that irritate the airways. Try to keep the air inside your house as fresh and as free of possible asthma triggers as you can.
It is no surprise that newly diagnosed asthma sufferers want to learn all they can about their condition. This article has provided essential information about asthma so that now, you can ease the stress of having this disease with knowledge. Use the advice from this article in your own life, and you just might find living with asthma is not as bad as it seems! | <urn:uuid:95948a0e-4563-4d8d-9b43-6d47e3a32606> | CC-MAIN-2021-21 | https://joyblissraw.com/tips-to-help-you-take-care-of-your-asthma/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00557.warc.gz | en | 0.956892 | 554 | 2.859375 | 3 | {
"raw_score": 1.6794930696487427,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Health |
PAST TIMES BY JOHN ASHTON
Since human beings first settled in Pictou County, unpredictable natural disasters have caused loss of life, great anguish and property damage.
Up until the 1840s, weather forecasting in our area relied mostly on observing cloud formations and animal behavior, on folklore, and on plain guessing. Many popular weather customs and beliefs have been passed down from generation to generation, and some are even used to this day to predict the weather, with some accuracy.
Some old weather lore phrases that are still used today are; “Red sky at night, sailors delight. Red sky in morning, sailors warning,” “Big snow, little snow. Little snow, big snow” and “Sudden storm lasts not three hours. The sharper the blast, the sooner ‘tis past.”
On the morning of July 25, 1895, thunder could be heard off in the distance. This might have been a good sign for the much needed rain in Pictou County. The area had been through a particular dry period all month long. “Brooks and streams were at their lowest ebb, scarcely enough water to make a current.”
The much-needed rain did come that early afternoon, and with it came one of the worst wind, thunder and hail storms to hit the eastern portion of Pictou County in nearly a century. This weather catastrophe made headlines in papers across Canada and the United States. In the San Francisco Call the banner heading read “Swept by a Tornado, A Terrific Storm Causes Destruction in Nova Scotia.” In the local Eastern Chronicle newspaper, it was described as “the worst storm ever known in this section.”
The McLellans Mountain and Brookville areas were the hardest hit. This tempest seemed to come out of nowhere, moving from the south to the northeast. The sky became very dark just after dinner, and around 1:30 p.m. the calamity began. “It seemed that the clouds were opened for a time and the torrents were more like a person emptying buckets of water than anything else imaginable.”
The rain came down so hard that people ran for shelter immediately. The deluge of water laid a path of destruction, deep ravines cut into streambeds and hill sides, some measuring 10 feet deep. Every waterway in the area turned into a raging surge. Small streams that were mere trickles of water days before became torrents of destruction, carrying away soil, timber, fences, rocks and trees. Crossing points and bridges were swept away, only to be found many miles away in mangled pieces. In one report “at least a foot of water fell over an area of a mile and a half.” Streams rose to unseen heights and some bridges were completely submerged under a foot of water. Great sheets of lightning accompanied the deluge striking several houses and barns in the vicinity.
The farming communities across the storm’s path were devastated. Proud standing crops and vegetable gardens were flattened and strewn across fields or carried away in the rapid rush of water. A massive hailstorm, with some hailstones measuring one and a half inches, pelted the area.
“The ground was so white at a depth of four inches, which to all appearance was in good condition for first class sleighing,” said one farmer of the incident. The hailstones lay on the ground well into the next summer day.
One of the local farmers, fearing for his grazing cattle, ventured out to save them. He had pastured them in a low-lying area of his property. The farmer had to turn back; the rushing water was well up to his waist. After much anguish, and to the family’s surprise, his treasured bovines swam through the flood and made it to safety. A hen house holding 20 chickens, standing 30 feet from the brook, was washed away. An old gristmill that had been through many a storm was carried downstream, its timbers strewn for miles across fields or tangled up on newly gouged embankments.
The people of these communities were eternally grateful that there was no loss of human lives. Property damage in the immediate area was estimated to be in the thousands.
John Ashton of Bridgeville is a local historian and the province’s representative to the Historic Sites and Monuments Board of Canada. | <urn:uuid:77eb7636-1fbd-433b-94bc-60d30c351d28> | CC-MAIN-2015-35 | http://www.ngnews.ca/News/Local/2014-06-15/article-3764411/McLellans-Mountain-flood-of-1895-was-devastating/1 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646249598.96/warc/CC-MAIN-20150827033049-00196-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.979913 | 902 | 2.90625 | 3 | {
"raw_score": 2.7361018657684326,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | History |
CLINTON, Miss.—When Kelsi Collins was first given a laptop last year at Clinton High School, she hesitated to change from years of reading textbooks and writing assignments by hand to researching topics and typing papers online. It didn’t help that, after she’d ignored teachers’ warnings to back up her work, her computer crashed and she lost ‘everything’ just nine weeks into the school year.
Still, within a few months, Collins was hooked.
“I use it for absolutely everything,” said Collins, who will start her senior year in August. “I don’t think I could go back to a textbook.”
The partly rural, partly suburban Clinton Public School District, in central Mississippi, is regarded by many districts as a model when it comes to technology use in classrooms. Every student in grades K-12 has an iPad or a laptop, and kids in grades 6-12 have a special backpack for carrying the device home. Enrollment in the district has increased by nearly 300 students since the 2011-12 school year, which some say is due to the allure of the technology. Administrators from other school districts have eagerly studied Clinton to learn how to implement their own digital learning programs.
But Clinton’s success has yet to be replicated to a large degree in the poorest and most rural parts of Mississippi — the “least-wired” state in the country according to a 2011 Census survey. More than half of Mississippians have no Internet at home, and 41 percent have no access to the Internet at all.
Mississippi is so far behind on technology use in schools, it earned an “F” on a “digital report card” published this year by Digital Learning Now, a group that advocates for more online learning. The rankings examined whether schools have high-speed broadband, whether teachers and students have Internet-capable devices, and whether the states have met certain benchmarks to ensure effective use of technology.
In Mississippi, this technology access gap only compounds the state’s most persistent educational problems. In the 2011-12 school year, only 75 percent of students graduated in four years, compared to the national average of 80 percent. After students graduate, they often struggle to find jobs. Nearly 20 percent of youth ages 16 to 24 are out of school and not working, the highest rate in the nation.
Advocates say that access to the Internet and technology can close critical information gaps by helping students find college and scholarship information, job applications, and educational resources like study guides and practice tests.
David Conley, director of the Center for Education Policy Research at the University of Oregon says interaction with technology is also crucial to preparing students for the tasks they’ll be expected to complete in postsecondary education.
“Think about what’s going to happen to those young people when they try to go to a college class that expects them to use new technology,” Conley said. “Any type of a problem will stop them in their tracks.”
Uneven access to technology, Conley added, is only widening the gap between the haves and the have-nots.
Nationwide, schools that haven’t yet integrated technology often face a basic problem: Their Internet connection is too weak and their laptops—if they even have them—are too old to handle whole classrooms of students spending most or even part of their day online.
Fewer than 20 percent of teachers said their school’s Internet connection meets their teaching needs, according to the White House. And according to a survey of schools by the Federal Communications Commission (FCC), half of schools and libraries that apply for federal subsidies have “lower speed Internet connectivity than the average American home — despite having, on average, 200 times as many users.”
In 2013 the Obama administration launched a new initiative, called ConnectEd, meant to increase broadband access, train teachers in how to better use technology and use model districts to demonstrate what works. But major obstacles remain, including the enormous costs of bringing more students, especially those in the most disadvantaged schools, online.
“We have some amazing schools and we have a lot of places where you can see this happening now. But we have a tremendous lack of equity,” said Karen Cator, president of Digital Promise, a nonprofit that helps school districts improve their use of technology. “We have a lot of work to do on this.”
In some cases, schools lack staff members with a knowledge of technology, and many face skepticism among educators, school board members and parents about whether technology can make enough of a difference to make the costs worthwhile.
Some districts in the state are more behind than others. A 2009 audit of the Tate County School District in north Mississippi found that the computers used for a vocational program were running Windows 3.0, a system from 1990.
In the past few years, some districts in the state have cobbled together funds from savings or grants to start programs like Clinton’s that provide a laptop or iPad to each student. This fall, students in pre-kindergarten through 12th grade will receive iPads or laptops in the Corinth School District, just south of the Tennessee border. In the Delta town of Clarksdale, the district will use a federal grant to roll out a one-to-one device program and provide technology to students who are behind in school.
In the Appalachian region of the state, which covers northeast Mississippi, some school districts have qualified for non-competitive grants from the Appalachian Regional Commission, a federal agency that provides financial support to areas in 13 states. Some districts like Water Valley, just south of Oxford, have used that grant money to buy computers and upgrade bandwidth so students can interact with more technology even though they don’t each have a laptop.
But for many schools in Mississippi, the priority with technology has been preparing for new online tests that will launch in 2015. The tests are aligned to the new Common Core standards, which Mississippi adopted in 2010.
Schools in Mississippi have been underfunded by the state by more than $1 billion over the past six years, which means many have struggled to buy basic supplies like pencils and paper while also upgrading bandwidth and computer labs. For many schools, it has been a strain on budgets to buy enough laptops for testing, and districts that do not have a surplus of funds or grant money have few options to expand technology.
In 2011 in Clinton, the district realized it would take a month for all its students to take the new online tests using the limited technology they had. Kameron Ball, director of technology for the Clinton Public School District, said that spreading the testing out across a month would give students who tested later more time to prepare than their peers, “just because the district didn’t have the technology.”
But Ball hesitated to adopt a “bring your own device” program, which has been embraced by some school districts across the country, allowing students to bring their own phone or tablet to school to use during lessons.
“Asking parents to add an iPad to their supply list didn’t seem equitable,” Ball said. “It just made sense to make sure all of our students received the same device.”
So the district bought nearly 2,700 laptops and about 2,500 iPads for students. It took years of preparation and planning to roll out the digital program. The district used savings and money from a local millage rate increase to fund the program, which cost more than $4 million.
More than 4,600 students attend Clinton’s schools, and about 45 percent of them receive free- or reduced-price lunch, a measure of poverty.
A team of administrators from Clinton visited a successful technology program in North Carolina, and then began to transition teachers from desktop computers to Apple laptops. All teachers received a laptop and had to be trained in the new technology, and the district held meetings throughout the community to invest parents in the idea. The district even hired several technology specialists. “I stole them from the Apple store,” Ball said with a smile.
While there’s no definitive evidence that technology improves suspension rates or behavior, Ball has noticed a positive change in student behavior since they received the technology. Suspensions in grades 6-12 decreased by nearly 30 percent in the first year of the program, and referrals to the office dropped as well.
Conley from the University of Oregon cautions that as schools roll out more technology, they need to ensure that it is used to help students be self-sufficient and productive.
“If you go to the average school the differences are often so great from classroom to classroom,” Conley said. While some teachers embrace it, Conley said that other teachers shy away from it or don’t really know how to use it within their classroom.
And even as districts in Mississippi add more technology, there’s no guarantee that it will improve education in the state. Research on the benefits of technology in classrooms is mixed, although some studies have found that using computers boosts student learning. A 2011 study found that technology can help students learn, although it tends to be more effective when technology supports student learning rather than directly delivering content or instruction.
Students at Clinton High School say that in most classes, they use the laptops to do research and type up reports and projects. In a biology class, students read the textbook online and complete interactive activities based on the material they’ve learned. About 40 percent of the textbooks used in the district are now digital or online, and the high school also adopted an online program that allows students to submit their work to their teachers and receive feedback.
Genesis Johnson, a 15-year-old at Clinton High School, said the computers have taught students more responsibility and self-discipline, and introduced them to basic Internet functions, like sharing documents on Google Drive, and using email. (Each student received a school email account.) “In elementary school they’re preparing you for junior high, in junior high they’re preparing you for high school,” Johnson said. “In high school you think you’re getting prepared for college, but you’re really not. Until you get those computers.”
A cautious approach in Greenville
The Delta town of Greenville overlooks the Mississippi river, nearly two hours northwest of Clinton. By many accounts, the Greenville Public School District’s precarious financial status makes it an unlikely early adopter of a technology program like Clinton’s. More than 93 percent of students qualify for free- or reduced-price lunch, and four of the district’s 10 schools received a failing grade from the state in the 2011-12 school year. Only about 21 percent of the district’s revenue comes from local sources, meaning the district must rely on federal aid or state aid to fund new initiatives.
But the poverty level made it even more imperative to start a technology program, said Leeson Taylor, superintendent of the Greenville Public School District. “We’re trying to be proactive and act on behalf of our students,” Taylor said.
Before he was superintendent, Taylor worked in the district’s federal programs division, where he helped the district find ways to be frugal and save money for the program. The district has used reserve money and federal Title I funding, which it receives for low-income students, to fund its new program. This fall, all students in sixth through twelfth grade will receive an iPad, while younger students will use iPads on carts in each classroom.
Administrators in Greenville have been wary of moving too fast with their program. After hearing about a district in another state where kids were robbed after receiving digital devices from schools, they delayed rollout until they could find volunteers to monitor children as they walked home after school. They also took note of Los Angeles Unified School District, which had to put a halt to their $1 billion iPad plan after a disastrous rollout where kids quickly figured out how to hack security settings and access non-educational content online.
Taylor said that some parents were skeptical of giving children such an expensive item, and not every teacher was enthusiastic about the change. “There are some model classrooms,” Taylor said, but also some classrooms where teachers had to be dragged “kicking and screaming into the 21st century.”
One of the district’s teachers who has embraced technology is Reginald Forte, a fifth grade teacher at Em Boyd Elementary who describes himself as “a tech person.” Forte said that he uses iPads most in math, science, and social studies to expand the amount and quality of information students can use.
“They have direct access to the latest information,” Forte said. “I may not know it, but they can go right to it.”
On one of the final weeks of school this spring, students in Forte’s class were working in groups on an end-of-year project. At a cluster of desks in the front of the room, fifth-graders Kiara McPherson and Jeremiah Hilliard were bent over iPads, searching through pictures on the Internet.
They were preparing a digital presentation about the properties of light, which they would later have to present to the class. Kiara clicked on a picture of a triangle prism and slid her iPad over to Jeremiah’s desk.
“Jeremiah, do you like this picture? It’s using refraction,” she said.
Jeremiah examined the picture closely. “Yes.”
With a flurry of motion, Kiara quickly downloaded the image, cropped the picture, and dragged it into the digital presentation she and Jeremiah were creating.
One of the goals of technology is to engage students, or make them more excited about learning, said Taylor, as he stepped into Forte’s class to watch the students’ presentations. “A lot of our kids are below the federal poverty level,” he added. “If we don’t build those experiences for them, then a lot of them will not have those experiences.” | <urn:uuid:952205f5-4ece-4145-a598-ca9fa6152585> | CC-MAIN-2017-39 | http://hechingerreport.org/content/mississippi-schools-access-technology-lacking-uneven_16660/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688932.49/warc/CC-MAIN-20170922093346-20170922113346-00467.warc.gz | en | 0.966063 | 3,005 | 2.875 | 3 | {
"raw_score": 3.0108964443206787,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Education & Jobs |
The ketogenic diet is said to have a powerful slimming effect. It aims to bring the body into a metabolic state called "keto," in which fat rather than carbohydrate is used as the energy source that moves the body, but putting the diet into practice exactly according to theory seems to be difficult. Because there are so many restrictions, it takes a strong will to keep going, even once you have started to lose weight.
What is a ketogenic diet?
The ketogenic diet is also called a ketone body diet and is a diet method that drives the body into a state where it must use lipids as energy.
Carbohydrate intake is kept extremely low, and 65% to 70% of calories are taken from fat. The body then cannot use carbohydrates as an energy source and instead enters a metabolic state (ketosis) in which the fat stored in the body is broken down and used.
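To make those percentages concrete, here is a small illustrative calculation of daily macros. The 2,000 kcal target and the 25% protein / 5% carbohydrate split are assumptions for illustration only (the article specifies only the 65–70% fat share); the energy densities of 9 kcal/g for fat and 4 kcal/g for protein and carbohydrate are standard values.

```python
# Illustrative keto macro targets. Assumed, not taken from the article:
# a 2,000 kcal/day intake split 70% fat / 25% protein / 5% carbohydrate.
CAL_PER_GRAM = {"fat": 9, "protein": 4, "carbs": 4}    # standard energy densities
SPLIT = {"fat": 0.70, "protein": 0.25, "carbs": 0.05}  # assumed macro split

def keto_macros(calories: float) -> dict:
    """Convert a daily calorie target into grams of each macronutrient."""
    return {k: calories * SPLIT[k] / CAL_PER_GRAM[k] for k in SPLIT}

for nutrient, grams in keto_macros(2000).items():
    print(f"{nutrient}: {grams:.0f} g")
# fat: 156 g, protein: 125 g, carbs: 25 g -- carbohydrate stays very low
```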
Ketogenic diet rules
Foods restricted on the ketogenic diet include:
● Processed foods
● Low-fat foods
● Carbohydrate-rich fruits
● Vegetable oils
Instead, eat plenty of non-starchy vegetables such as broccoli, asparagus, and spinach, lean meats and high-quality oily fish, full-fat dairy products, and nuts and seeds.
Why is the ketogenic diet said to be effective in controlling blood sugar in type 2 diabetes?
Carbohydrates and other sugars contained in food are broken down by digestive enzymes into glucose, which is absorbed from the small intestine. Ingesting sugar therefore raises the blood sugar level. The ketogenic diet is centered on high fat, low carbohydrate, and moderate protein. In a healthy person, when the blood sugar level rises, insulin in the blood also rises in response and keeps the blood sugar level within a certain range; but when insulin does not work well, this control fails. A diet that does not raise the blood sugar level sharply is therefore helpful.
Because the ketogenic diet is low in sugar, blood sugar and insulin levels can be expected not to spike after meals, which reduces the burden on the pancreas and alleviates insulin resistance.
Insulin also has the function of lowering blood sugar levels while at the same time storing the glucose that could not be used as energy in the body as fat. A low-carb diet reduces the need for insulin secretion, which reduces fat accumulation. In addition, in a state of ketosis the body's energy source is not glucose; body fat is broken down to produce energy instead, which is why a slimming effect can be expected from intermittent ("petit") fasting and ketogenic diets.
The Best Keto Diet Foods
Consciously eat foods that are low in sugar and rich in fiber and foods that contain omega-3 fatty acids.
● Avocado (avocado oil)
● Fatty fish (Fish oil)
● Olive oil
Benefits of a ketogenic diet
According to certified dietitian Beth Warren, “Ketogenic diets replace most of the sugar with proteins and lipids, so they are more satisfying after eating than other diets, including vegans.
Here is a list of foods you can eat on a ketogenic diet.
Meat: All meat, including chicken. It is recommended to eat the fatty parts of meat, and chicken with the skin on, to increase fat intake. If possible, grass-fed beef and organic meats seem to be preferred.
Seafood: All seafood; especially recommended are fatty fish such as salmon, mackerel, sardines, and herring. Avoid fish cooked with bread crumbs or tempura batter (these add sugar). Of course, you can also eat sashimi such as tuna and yellowtail, as well as fish roe (salmon roe, mentaiko, etc.).
Eggs: All eggs, cooked however you like: boiled, scrambled, as an omelet, and so on. Avoid egg dishes sweetened with sugar or mirin.
Vegetables (above-ground vegetables rather than root vegetables): cauliflower, broccoli, cabbage, Brussels sprouts, kale, Chinese cabbage, spinach, asparagus, zucchini, eggplant, olives, mushrooms, cucumbers, lettuce, avocado, onions, peppers, tomatoes, etc.
Dairy products: Butter, whipped cream, sour cream, Greek yogurt, and high-fat cheese. Regular milk has had the fat (cream) removed and is relatively high in lactose (milk sugar), so caution is required. Also, avoid flavored milk such as coffee milk.
Nuts: Pecan nuts, macadamia nuts, Brazil nuts, almonds, etc. are low-sugar and high-fat nuts, and other nuts can be eaten in small quantities.
Berries: Berries have relatively low sugar content among fruits, so small amounts can be eaten. Blueberries, raspberries, blackberries, strawberries, etc.
A balanced diet is possible even on a ketogenic diet. Of course, processed foods and poor-quality fats such as vegetable oils and margarine are off-limits. Eat good protein and fat, eat plenty of green and yellow vegetables, try to supplement any missing nutrients, and follow the ketogenic diet correctly so that you can perform better every day.
Some fruits, such as the berries mentioned above, can be quite suitable to have on a keto diet. | <urn:uuid:9ddd53d3-d15d-4a2d-b2a5-dbbe2a9f19dc> | CC-MAIN-2021-43 | https://excitesubmit.org/keto-diet-food-good-for-health/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00614.warc.gz | en | 0.936302 | 1,125 | 2.84375 | 3 | {
"raw_score": 1.869309425354004,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Health |
By Alison Trinidad
For the first time, scientists and engineers have identified a critical cancer-causing component in the virus that causes Kaposi’s sarcoma, the most common cancer among HIV-infected people. The discovery lays the foundation for developing drugs that prevent Kaposi’s sarcoma and other related cancers.
“The mechanism behind the Kaposi’s sarcoma-associated herpesvirus (KSHV) that causes healthy cells to become malignant is not well understood despite two decades of intensive studies,” said S. J. Gao, PhD, professor of molecular microbiology and immunology at the Keck School of Medicine of USC and principal investigator of the study. “This is the first time that a viral factor has been shown to be required for KSHV-induced malignant transformation. We have identified a mechanism by which these tiny viral molecules cause the cells to become malignant.”
Distinguished by dark lesions on the skin, Kaposi’s sarcoma most commonly develops in people who are infected with KSHV and also have compromised immune systems.
Although many people infected with KSHV never show any symptoms, Kaposi’s sarcoma is a persistent problem in areas where HIV infection is high and access to HIV therapy is limited. More than 90 percent of the population in some areas of Africa shows signs of KSHV infection, according to the American Cancer Society.
Gao and colleagues from the University of Texas at San Antonio (UTSA) and University of Texas Health Science Center at San Antonio studied KSHV using a rat stem cell model they developed in 2012. Until then, researchers had been unable to study the virus because most healthy cells, once infected with KSHV, died before turning into cancer cells.
In this study, which appeared in the Dec. 26 edition of the peer-reviewed journal PLOS Pathogens, the team identified a cluster of viral microRNA molecules that are necessary to transform healthy cells into cancerous ones. When this microRNA cluster was suppressed, the cells died after they were infected with KSHV. Flipping the switch and turning the cluster back “on,” however, allowed the cells to stay alive and become malignant when infected with the virus.
Using advanced genomic methods, the researchers also found that the microRNAs target the IκBκ protein and the NF-κB cellular pathway, both of which are associated with cancer development.
“Our results suggest that this cluster of KSHV microRNAs and their regulated NF-κB pathway may be potential targets for new therapeutics of KSHV-related cancers,” said Gao, who is also a member of the USC Norris Comprehensive Cancer Center. “Several of the microRNAs appear to have redundant functions, so targeting their common pathways might be a more feasible approach. It would be interesting to test them in the KSHV-induced Kaposi’s sarcoma model.”
Yufei Huang, PhD, professor of electrical and computer engineering at UTSA, is the study’s co-corresponding author. Other USC authors include researchers Ying Zhu, PhD, and Tiffany Jones, PhD. Their work was supported by the National Institutes of Health (grants CA096512, CA124332, and CA177377). | <urn:uuid:b05b0191-74c8-43db-aac8-174a57be7793> | CC-MAIN-2021-17 | https://hscnews.usc.edu/viral-micrornas-responsible-for-causing-aids-related-cancer | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464045.54/warc/CC-MAIN-20210417192821-20210417222821-00099.warc.gz | en | 0.953636 | 698 | 3.21875 | 3 | {
"raw_score": 2.968839645385742,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
By Gilad Glick
- Mechanical Stretch due to negative Intra-Thoracic Pressure
Obstructive Sleep Apnea is defined as a physiological event in which the upper airway is partially or completely blocked during sleep, mostly as a result of soft tissue in the throat collapsing while you lie on your back. This makes your diaphragm and chest muscles work harder to open the obstructed airway and pull air into the lungs. As the obstruction persists, significant negative pressure develops in the inner space of the thorax. Breathing usually resumes when the sympathetic nervous system is activated, regaining control over the throat muscles and reopening the airways, ending with a loud gasp, snort, or body jerk. You may not sleep well, but you probably won't be aware that this is happening.
Since the heart "sits" on the diaphragm at its lower end and is attached to the aorta and pulmonary veins at its upper end, this negative intra-thoracic pressure mechanically stretches the heart muscle, potentially causing micro-scarring of the left atrial tissue, which in turn promotes arrhythmogenic characteristics (in particular Atrial Fibrillation).
Given that moderate to severe sleep apnea patients experience Sleep Apnea events between 15 and 60 times an hour, or hundreds of times a night, the accumulated damage can be significant.
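A quick calculation shows where "hundreds of times a night" comes from; the 7–8 hours of sleep assumed below is illustrative rather than a figure from the text.

```python
# Events per night implied by an AHI of 15-60 events/hour,
# assuming 7-8 hours of sleep per night (an illustrative assumption).
for ahi in (15, 30, 60):
    low, high = ahi * 7, ahi * 8
    print(f"AHI {ahi}: {low}-{high} apnea events per night")
# AHI 15: 105-120, AHI 30: 210-240, AHI 60: 420-480
```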
- Frequent interruptions to the sympathetic nervous system
As described above, most sleep apnea events end with a surge of the sympathetic branch of the autonomic nervous system. It is also known that the heart's electrical system is tightly connected to, and to some degree regulated by, this same system. It is therefore assumed that the load sleep apnea imposes on the system has a profound impact in promoting irregular electrical activation patterns, or arrhythmias.
- Drop in blood oxygen saturation leads to Oxidative Stress
There are serious negative consequences of repetitive oxygen desaturations, also known as Intermittent hypoxia, occurring during Sleep Apnea and affecting the entire cardiovascular system.
“The two most significant are the formation of reactive oxygen species (ROS) and inducing oxidative stress (OS). ROS can damage biomolecules, alter cellular functions and function as signaling molecules in physiological as well as in pathophysiological conditions. Consequently, they promote inflammation, endothelial dysfunction and cardiovascular morbidity. Oxidative stress is also a crucial component in obesity, sympathetic activation and metabolic disorders such as hypertension, dyslipidemia and type 2 diabetes/insulin resistance, which aggregate with OSAHS.”
A correlation has also been found between Apnea/Hypopnea Index (AHI) severity and lipid peroxidation, protein oxidation, and impaired endothelial function. Endothelial function is a pivotal factor in vascular pathogenesis and is known to be adversely affected by the promotion of oxidative stress and inflammation, which reduce nitric oxide (NO) availability and thus diminish the endothelium's vital vascular repair capacity.
To learn more about how EPs and cardiologists address sleep apnea in their cardiac care pathway to improve outcomes, quality of life and reduce AFib recurrence – read the article “How Electrophysiologists Reduce AFib Recurrence by Addressing Sleep Apnea”
To learn more about how our Cardio-Sleep customized solutions can help you address Sleep Apnea as part of your cardiac care workflow, while maintaining a positive patient experience and minimal workload on your staff – contact us here.
Lavi et al, Molecular mechanisms of cardiovascular disease in OSAHS: the oxidative stress link. Eur Respir J 2009; 33: 1467–1484
Hopps et al, Lipid peroxidation and protein oxidation are related to the severity of OSAS. European Review for Medical and Pharmacological Sciences 2014; 18: 3773-3778
Oxidative Stress, and Repair Capacity of the Vascular Endothelium in Obstructive Sleep Apnea
Jelic S, Padeletti M, Kawut SM, Christopher Higgins C, Canfield SM, Onat D, Paolo C. Colombo PC, Basner RC, Factor P, and LeJemtel TH. Circulation. 2008 April 29; 117(17): 2270–2278.
Scherbakov et al, Sleep-Disordered Breathing in Acute Ischemic Stroke: A Mechanistic Link to Peripheral Endothelial Dysfunction. J Am Heart Assoc. 2017 Sep 11;6(9). | <urn:uuid:2353fb23-f9a9-46e0-9b29-b885e67123f7> | CC-MAIN-2023-23 | https://www.itamar-medical.com/articles/the-potential-pathophysiological-mechanisms-of-obstructive-sleep-apnea-that-may-be-a-major-contributor-to-afib-disease-progression/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644574.15/warc/CC-MAIN-20230529010218-20230529040218-00298.warc.gz | en | 0.902553 | 964 | 2.84375 | 3 | {
"raw_score": 2.825094699859619,
"reasoning_level": 3,
"interpretation": "Strong reasoning"
} | Health |
Makara Sankranti is one of the most celebrated festivals in India, but astrologically it is the day when the Sun begins its movement away from the Tropic of Capricorn and towards the northern hemisphere.
That is why we notice the Sun rising and setting slightly towards the north-east and north-west respectively between mid-January and mid-July every year.
This period is called Uttarayan in Vedic terms, and the word means uttar (north) + ayan (movement) of the Sun.
The remaining 6 months are termed dakshinayan, which means the southward movement of the Sun.
Originally, this was celebrated as the winter solstice in ancient India; but because the equinoxes, and with them the solstices, continually precess at a rate of about 50″ per year, the winter solstice now occurs on 21st December each year.
Surya Siddhantha bridges this difference by juxtaposing the four solstitial and equinoctial points with four of the twelve boundaries of the signs.
In the Vedas, Uttarayan is termed the day of the gods (the devas in heaven) and Dakshinayan is their night. This is very much like the six-month day and six-month night at the North Pole.
In this sense, Uttarayana is defined as the period between the Vernal and Autumnal equinoxes (when there is Midnight Sun at the North Pole). Conversely, Dakshinayana is defined as the period between the Autumnal and Vernal equinoxes, when there is midnight sun at the South Pole. This period is also referred to as Pitrayana (with the Pitrus, i.e. the ancestors, being placed at the South Pole).
Usually, when Uttarayana starts, it is the start of winter. As the equinox slides, the ayanamsha increases and Makar Sankranti also slides. In 1000 AD, Makar Sankranti was on December 31st and now it falls on January 14th; after 9,000 years, Makara Sankranti will be in June.
It would seem absurd to have Uttarayana in June, when the Sun is about to begin its southward movement, i.e. Dakshinayana. This misconception continues because there is not yet much difference between the actual Uttarayana date of December 21 and January 14. However, the difference will become significant as the equinoxes slide further.
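The rate of that slide can be checked with simple arithmetic from the roughly 50″-per-year precession rate quoted earlier: one degree of precession takes about 72 years, and each degree corresponds to about one calendar day. A small sketch (50.3″/year is the commonly cited modern value behind the rounded 50″ figure):

```python
# How fast Makar Sankranti drifts through the calendar, from the precession rate.
PRECESSION_ARCSEC_PER_YEAR = 50.3                       # the text rounds this to 50"/year
YEARS_PER_DEGREE = 3600 / PRECESSION_ARCSEC_PER_YEAR    # ~71.6 years per degree of drift
DAYS_PER_DEGREE = 365.25 / 360                          # ~1.01 calendar days per degree

def drift_days(years: float) -> float:
    """Calendar days the date slips over a given number of years."""
    return years / YEARS_PER_DEGREE * DAYS_PER_DEGREE

print(round(drift_days(2014 - 1000)))  # ~14 days: Dec 31 (1000 AD) -> Jan 14 (2014)
print(round(drift_days(9000)))         # ~128 days past mid-January, i.e. into late May-June
```

The 1000 AD-to-2014 figure reproduces the two-week shift mentioned above, and the 9,000-year figure lands around the late-May-to-June estimate.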
Uttarayana begins on the day when the day is shortest on Earth. According to Srimad Bhagavatam 5.21.7, the "duration of days starts to increase from Uttarayana commencement." Even as per Srimad Bhagavatam, that day is December 22nd (5,000 years ago, when the Bhagavatam was composed).
This year's Makar (Capricorn) Sankranti (Sankraman = transmigration) occurs on 14th January 2014 at approximately 11 AM IST (according to K.P. Ayanamsa).
The 4.5 hours from this moment are called the Sankranti Punya Kaal Muhurta, and the Mahapunya Kaal Muhurta is only the initial 24 minutes from 11 AM.
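Expressed as clock times, those windows work out as follows; the 11:00 AM IST start is the figure given above, and the datetime arithmetic is only an illustration.

```python
from datetime import datetime, timedelta

# Sankranti moment as given above: 14 January 2014, 11:00 AM IST (K.P. Ayanamsa).
sankranti = datetime(2014, 1, 14, 11, 0)

punya_kaal_end = sankranti + timedelta(hours=4, minutes=30)   # 4.5-hour window
maha_punya_end = sankranti + timedelta(minutes=24)            # initial 24 minutes

print("Punya Kaal:      11:00 -", punya_kaal_end.strftime("%H:%M"))   # 11:00 - 15:30
print("Maha Punya Kaal: 11:00 -", maha_punya_end.strftime("%H:%M"))   # 11:00 - 11:24
```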
This is the best time to take a bath, offer water to the Sun, and sit alone and meditate on whatever has been bothering your mind and body for the past 6 months.
This is also the best time to spend in a temple or at a peaceful place, reciting the Gayatri Mantra in silence.
In the Vedic religion, God doesn't have any shape, and for us, time is a manifestation of God.
Vishnu, or Narayana, is worshipped through the Sun God, and the Sun's equinoxes are celebrated as festivals.
Makara Sankranti Phalam for 2014
Sankranti Karna = Garaja
Sankranti Weekday = Tuesday
Sankranti Date = 14/01/2014
Sankranti Moment = 11:00 AM, IST (13:17 according to Drik Siddantham/Panchangam or N.C.Lahiri Ayanamsa)
Sankranti Ghati = 18 (Dinamana)
Sankranti Moonsign = Mithuna
Sankranti Nakshatra = Arudra (Daruna Sangyaka)
Vedic astrology personifies Sankranti. As per Vedic astrology, Sankranti is 60 Yojana (approximately 432 km) wide and long. Sankranti has the figure of a man with one face, a long nose, wide lips and nine arms. It moves in a forward direction but keeps watching backwards. It keeps revolving while holding a coconut shell in one hand.
As per Hindu beliefs, the above personification seems inauspicious, and hence the Sankranti window is prohibited for all auspicious activities. However, the Sankranti duration is considered highly significant for charity, penance and Shradh rituals. People offer alms to the needy, take baths in holy rivers and perform Shradh for ancestors during Sankranti.
Vedic astrology also lists characteristics of each Sankranti based on the Panchangam. These characteristics are an omen of coming events in the month. Whatever items are influenced by the Sankranti Purusha are believed to go through a bad time.
Example: If the Sankranti Purusha is adorned with gold, then the coming month is not good for those who deal in gold, and so on.
This year, the Sankranti Purusha poses a threat to thieves, a threat of war to a few countries, defamation of a few celebrities and politicians, lower milk production in the dairy industry and diseases among cattle, an increase in the gold rate during the second half of the year, a threat to the north-eastern countries of the globe and states in that direction, drought, and an increase in the prices of all commodities.
From this Sankranti, people will suffer from coughs and colds; there will be conflict among nations and chances of famine due to lack of rain.
Good times are ahead for cruel, sinful, corrupt people and criminals.
Only small-time thieves will be caught and punished.
In general, Sun in Capricorn brings good luck, wealth and health for people born in Moon Signs and Ascendants of Pisces, Leo & Scorpio.
It will be an average month for those born in Aries, Taurus, Virgo, Sagittarius.
This will be bad for other signs. | <urn:uuid:fc95bf24-3a02-4e78-840a-0392ac39c31d> | CC-MAIN-2014-10 | http://www.astrogle.com/astro-predictions/makar-sankranti-2014-astrological-significance-effects.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011117323/warc/CC-MAIN-20140305091837-00089-ip-10-183-142-35.ec2.internal.warc.gz | en | 0.944664 | 1,379 | 3.5 | 4 | {
"raw_score": 2.3373496532440186,
"reasoning_level": 2,
"interpretation": "Moderate reasoning"
} | Religion |
Below is a list of helpful website links we've gathered for your convenience, along with our notes about each.
- American Academy of Pediatrics (AAP)
This site provides information on carseat, airbag, and seatbelt safety, current immunization schedule, physician referrals, child care books and many position statements of the AAP on various topics.
- General Pediatric Topics (From American Academy of Pediatrics)
- Kids Health (Educational Health Site for Parents & Children)
- American Academy of Allergy, Asthma and Immunology
- Food Allergies
- Diabetes Information
- Center for Disease Control (CDC)
This site provides information on immunizations, lead poisoning, "Health Information from A to Z", traveler's health, and poisoning prevention.
- Consumer Product Safety Commission (CPSC)
This site provides information on product recalls, and has an interactive safety page for children.
- Environmental Protection Agency (EPA)
This site provides information on lead, drinking water, pesticides, asbestos and other topics affecting your child's health. There is also an interactive home page just for children.
- Food and Drug Administration (FDA)
This site provides information on immunizations, medications, and food safety, along with an interactive home page for children.
- Immunization Action Coalition (IAC)
This site allows user to download and print vaccine information statements (VIS), containing pertinent facts about immunizations.
- Injury Free Coalition for Kids of Chicago
This site offers many useful links to programs and hospitals.
- Kids In Danger
This non-profit organization is dedicated to protecting babies and young children from unsafe products.
- Children's Memorial Hospital
This site provides information about the hospital, maps to all satellite locations, activities for children, current research, and patient education classes.
- Evanston Hospital (ENH)
This site provides information about the hospital and patient education classes.
- Northwestern Memorial Hospital (NMH)
This site provides information about the hospital and physician referrals.
- National Network for Immunization Information
- Children’s Digestive Health and Nutrition Foundation
- Celiac Disease
- Consortium to Lower Obesity in Chicago Children
- Car Seat Information
- National Highway Safety Administration
- Poison Control
- Traveler’s Health
- National Institute of Mental Health
- Learning Disabilities
- Centers for Disease Control and Prevention
- Autism Speaks
- American Academy of Child and Adolescent Health
- Children & Adults with Attention Deficit/Hyperactivity Disorder
- Pediatric Dentistry
- Chicago Area Adoption Support
- National Association for Down Syndrome (Based in Wilmette, Il)
- National Down Syndrome Society
- Information regarding children’s needs in a divorce situation | <urn:uuid:8f8fe085-f65f-4065-b590-428e0c280037> | CC-MAIN-2015-48 | http://www.townandcountrypeds.com/resources/links.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464386.98/warc/CC-MAIN-20151124205424-00169-ip-10-71-132-137.ec2.internal.warc.gz | en | 0.760086 | 565 | 2.53125 | 3 | {
"raw_score": 1.42200767993927,
"reasoning_level": 1,
"interpretation": "Basic reasoning"
} | Health |
At the heart of American constitutional democracy is the concept of checks and balances: limits on the reach of each branch of government so that none can act unilaterally or exercise power without accountability.
The power to initiate warfare, because of its grave and potentially long-term consequences for the entire republic, is rightly assigned to the entire Congress under the American system, rather than to the President, a single individual. Wisely, the Constitution provides that the decision to go to war should be debated thoroughly and openly in Congress, rather than carried out by a secretive order, on the judgment of one person.
The 1973 War Powers Resolution, which sought to reassert Congressional control over executive warmaking, has failed since its inception to rein in the executive. The legislation was seriously flawed at the time, and has proved inadequate to deal with contemporary issues of conflict and the division of powers. Every President since 1973 has asserted that it is unconstitutional; Congress has been loath to challenge non-compliance. Despite numerous challenges to Presidential war making, many by the Center for Constitutional Rights, the courts have refused to adjudicate claims of violations.
In this context, the Bush administration has been able to greatly extend the practical usurpation of war powers by the executive, as part of an unprecedented expansion of executive power overall. Not only did the Bush administration overreach its authority by wresting control of war-making from Congress, but it did so on the basis of false information and, in some cases, authorizations from Congress for the limited use of military force were used as blanket authorizations for all kinds of ongoing policies, programs and hostilities. In other cases, the executive acted secretly and unconstitutionally to carry out “military actions” of various magnitudes without accountability.
President Obama must pledge to help restore the balance of power and work with Congress to support a reform and revision of the War Powers Resolution. As a matter of constitutional integrity, all executive acts of war must be prohibited without Congressional authorization, and must comply with international law. President Obama must also end the wars launched, illegally, by the Bush administration.
The United States Constitution assigns to Congress the power to declare war, as well as the power to issue letters of marque and reprisal referring to hostilities short of full-scale war, and to control funding for the armed forces. The President, as commander in chief, is given the power to lead the armed forces. Since World War II the United States has not formally declared war; contemporary conflicts – including the wars in Korea and in Vietnam, and military actions against non-state forces – have not been declared wars. It is in this context that Congress and the President have battled over the question of authority for taking the country to war.
In 1973, after years of undeclared war in Vietnam, Congress passed the War Powers Resolution with the intention of restoring Congressional authority to decide when the United States should go to war or engage in military action that might lead to war. The resolution declares that “[T]he President, in every possible instance, shall consult with Congress” before introducing US Armed Forces into hostilities or imminent hostilities, and that the President must report within 48 hours any such introduction of forces. Once such a report is submitted, Congress has 60 days to authorize such use of United States forces or extend the time period, and if it does not do so, the President’s power to use those troops automatically terminates and they must be withdrawn. Although President Nixon vetoed the resolution, it was overridden by a two-thirds vote in both houses of Congress and became law.
Since 1973, however, US presidents have generally ignored the War Powers Resolution, and have argued that it is unconstitutional. Though presidents have submitted reports and requests for authorization of military force to Congress more than 100 times since then, covering everything from embassy evacuations to the Kosovo intervention, the executive branch has continued to insist that the authority of the commander in chief means that presidents are not bound by the War Powers Resolution. In fact, in only one case (the 1975 Mayaguez seizure) did the executive acknowledge that it was acting pursuant to the War Powers Resolution, thus triggering the time limit, and in that case only after the action was over and US forces withdrawn. On only one occasion has Congress exercised its authority to determine that the time requirements of the act would become operative and extended the time period through passage of the 1983 Multinational Force in Lebanon Resolution.
Both Democratic and Republican presidents have claimed the right to engage in wars without Congressional authorization. For example, in 1990, George H. W. Bush claimed that he could go to war in Iraq, and in 1999 Bill Clinton used force against Yugoslavia after the House had refused to specifically authorize hostilities. The 1999 war against Yugoslavia clearly violated the War Powers Resolution in that it lasted more than 60 days without Congressional authorization.
Both Bush administrations claimed, in 1991 and 2002 respectively, that even though Congress enacted resolutions authorizing force, they still exercise independent executive authority to continue and expand wars and are not bound by the actions of Congress. They articulated broad theories of presidential power, under which the President alone can use force in a broad array of circumstances. As President George H.W. Bush put it, “I didn’t have to get permission from some old goat in the United States Congress to kick Saddam Hussein out of Kuwait.”
In a great many instances, neither the President nor Congress, nor even the courts have been willing to trigger the War Powers Resolution mechanism. This is in part because the courts will not enforce the Resolution where Congress is either silent or acts ambiguously, even though the law clearly requires the troops to be withdrawn in such circumstances. In 1999, in the case of Yugoslavia, Congress voted not to authorize war, yet failed to pass legislation ordering the troops home and in fact funded the military action. Clearly, without reform of the legislation to address its weaknesses and without a concerted effort by a new executive in concert with Congress, the debate over war powers and responsibilities will remain paralyzed.
War Powers in the George W. Bush Years
In instances where Congress is too opposed, divided, conflicted or unsure to affirmatively authorize warfare, both the Constitution and the War Powers Resolution require that the United States not go to war. And yet, the Bush administration repeatedly forged ahead in defiance of the law, relying on an unconstitutional claim of executive power and the cynical political expectation that Congress would not want to be responsible for withholding support from American troops or ending a war once it was launched.
Post 9/11 Authorization for the Use of Military Force
In the immediate aftermath of September 11, 2001, President Bush and Congressional leaders negotiated legislation authorizing the President to take military steps to deal with the parties responsible for the attacks on the United States. The Authorization to Use Military Force (AUMF) was passed on September 14, 2001, giving the President powers to “use all necessary and appropriate force against those nations, organizations or persons he determines planned, authorized, committed or aided the terrorist attacks…or harbored such organizations or persons, in order to prevent any future acts of international terrorism….”
The resolution stated that it was intended to “constitute specific statutory authorization within the meaning of section 5(b) of the War Powers Resolution,” and that “[n]othing in this resolution supersedes any requirement of the War Powers Resolution.”
President Bush, for his part, asserted that the AUMF “recognized the authority of the President under the Constitution to take action to deter and prevent acts of terrorism against the United States.” He said, “In signing this resolution, I maintain the longstanding position of the executive branch regarding the President’s constitutional authority to use force, including the Armed Forces of the United States, and regarding the constitutionality of the War Powers Resolution.” In this way, both the President and Congress maintained their positions on the constitutionality of the War Powers Resolution and the responsibilities of the President under it, even as Congress found a way to support the President’s immediate response to the attacks.
The Bush administration, however, subsequently cited the 2001 AUMF as justification for virtually every “anti-terrorist” program it would carry out over the next seven years, thereby conjuring vast presidential war powers that Congress clearly never intended to grant. The president insisted that the AUMF not only authorized the invasion of Afghanistan, but also: the substitution of military commissions for courts to try prisoners in Guantánamo; the detention without any hearing of prisoners captured anywhere in the world and deemed enemy combatants; the warrantless wiretapping in the United States by the NSA; the preventive detention of US citizens and resident aliens captured not on any battlefield abroad but within the United States; and the rendition and torture of terrorism suspects. The president also claimed that he had independent war powers that could not be curtailed by Congress and would allow him to, for example, torture prisoners even where Congress enacts legislation prohibiting such treatment.
In the summer of 2002, the Bush administration began publicly denouncing Iraq for its supposed possession of weapons of mass destruction, and suggesting that Iraq was allied with Al Qaeda’s terrorist network. The campaign was unmatched in recent history for its cynicism: in the absence of Central Intelligence Agency and Defense Intelligence Agency intelligence supporting such allegations, the White House manufactured its own evidence, as was later revealed by both news media and Defense Department reports.
The Bush administration also clearly ignored the United Nations Charter prohibiting wars that are not sanctioned by the UN Security Council or carried out in self-defense. And, when justifying the war, then Secretary of State Colin Powell made a patently false presentation to the Security Council about Saddam Hussein’s possession of weapons of mass destruction. Other administration officials presented a similarly false picture; for example, as the invasion began, Secretary of Defense Donald Rumsfeld claimed that US officials not only knew that Iraq had weapons of mass destruction, but also that “[w]e know where they are.”
The administration also lied to Congress. In October 2002, a few days before the Senate was to vote on the Joint Resolution to Authorize the Use of Armed Forces Against Iraq, about 75 Senators were told in closed session that Saddam Hussein had definitive means to attack the eastern seaboard of the US with biological or chemical weapons. Based on such misrepresentations, Congress voted to approve the initiation of war with Iraq.
The Constitution’s requirement that only Congress has the power to initiate war is designed to ensure an open, honest and public debate about whether to go to war. While Bush went to Congress to get authorization, the spirit of the Constitution was not complied with, because the executive did not inform Congress of the true facts, and Congress abdicated its responsibility to seriously attempt to determine what those true facts were.
Moreover, Bush again refused to concede any war power to Congress. While President Bush noted he had sought a “resolution of support” from Congress to use force against Iraq, and that he appreciated receiving that support, he also stated that: “my request for it did not, and my signing this resolution does not, constitute any change in the long-standing positions of the executive branch on either the president’s constitutional authority to use force to deter, prevent, or respond to aggression or other threats to US interests or on the constitutionality of the War Powers Resolution.”
The long and tragic history that followed leaves little room for doubt that the nation would have been better served by frank and open debate before Congress approved the resolution authorizing the attack on Iraq, an illegal act in violation of the UN Charter. The war and occupation have cost not only billions of dollars but also hundreds of thousands of Iraqi, American, and allied lives, and have created violence in the region and spurred internal conflict in Iraq.
The responsibility of the Bush administration for this devastating military adventure is clear. Congress, too, bears responsibility for its own failures over the years, including its continued funding of the Iraq war in the face of military failure, gross human rights abuses and spiraling costs. But the structural inadequacy of the existing War Powers Resolution as a brake on dangerous executive war-making is more evident than ever. In the Obama Administration, this must be corrected to protect America – and the world – against similar future disasters.
Summary and Policy Proposals
Congressional power to declare war has been usurped by the executive’s assertion of its exclusive decisionmaking power to engage in unchecked “military actions” of various magnitudes. The constitutional vision of the commander in chief – someone responsible for taking short, quick, defensive actions in emergency situations – has been superseded by the vision of presidents, most recently and most egregiously George W. Bush, who claim sole authority to conduct protracted, offensive wars and large-scale military actions. Executive authority has been so distorted that an unconstitutional vision of war power as presidential prerogative has taken over. The wars launched illegally by the Bush administration must be brought to a close. And the constitutional vision of Congress holding war powers must be realized through effective legislation, as an important democratic brake on executive adventurism.
Reform the War Powers Resolution
The War Powers Resolution has failed. Every president since the enactment of the Act has considered it to be unconstitutional. Presidents have generally not filed a report that would start the 60-day clock running, despite repeated executive introduction of armed forces into places like Indochina, Iran, Lebanon, Central America, Grenada, Libya, Bosnia, Haiti, Kosovo and Somalia, among others. Congress has usually not challenged this non-compliance. And the judiciary has persistently refused to adjudicate claims challenging executive action as violating the War Powers Resolution, holding that members of Congress have no standing to seek relief, or that the claim presents nonjusticiable political questions.
The War Powers Resolution, as written, was flawed in several key respects. The first flaw was that the Resolution imposed no operative, substantive limitations on the executive’s power to initiate warfare, but rather created a time limit of 60 days on the president’s use of troops in hostile situations without explicit congressional authorization. This approach was a mistake, because as a practical matter it recognized that the President could engage in unilateral war-making for up to 60 days, or 90 days with an extension.
But the Constitution requires that Congress provide authorization prior to initiating non-defensive war, not within a period of months after warfare is initiated. As history has demonstrated time and again, it is difficult to terminate warfare once hostilities have begun. The key time for Congress to weigh in is before hostilities are commenced, not 60 or 90 days afterward.
Secondly, the War Powers Resolution correctly recognized that congressional silence, inaction or even implicit approval does not allow the president to engage in warfare – but it failed to provide an adequate enforcement mechanism if the president did so. Under the resolution, wars launched by the executive were supposed to be automatically terminated after 60 or 90 days if not affirmatively authorized by Congress – but this provision proved unenforceable. Presidents simply ignored it, Congress had insufficient interest in enforcing it, and the courts responded, in effect, by saying: if Congress did nothing, why should we?
Reforming the War Powers Resolution is a project that will require leadership from the President and the political will of Congress, working together in the service and preservation of the Constitution. In light of the abuses that have taken place under the Bush administration, it is the responsibility of a new administration to insist on transparency in the drafting of new legislation.
There is a long history of attempts to revise the War Powers Resolution. As new legislation is drafted, though, it will be important to focus on the central constitutional issues. Much time has been spent in debating how to address contingencies. It will be impossible to write into law any comprehensive formula for every conceivable situation, though; much more important will be establishing the fundamental principles of reform:
The War Powers Resolution should explicitly prohibit executive acts of war without previous Congressional authorization. The only exception should be the executive’s power in an emergency to use short-term force to repel sudden attacks on US territories, troops or citizens.
It is true that many potential conflict situations will be murky, complicated or divisive, and that quick congressional action may not always be forthcoming. Yet, history shows the folly of launching wars that are not supported by the American people. The United States should not use military force until a substantial consensus develops in Congress and the public that military force is necessary, appropriate and wise.
Today, as in 1787, the reality is that the interests of the people of the United States are best served if Congress retains the power to declare war, and the President’s unilateral power to use American forces in combat is reserved to repelling attacks on American troops or territories and evacuating citizens under actual attack. Repelling an attack does not mean retaliating for an attack on an American city that occurred in the past, be it several days, weeks or months prior; nor does it mean launching a surprise invasion of a nation that has not attacked us. Repelling similarly does not permit the inflation of supposed threats against US citizens as justification to invade another country, as was the case in the Dominican Republic in 1965 and Grenada in 1983. The president can respond defensively to attacks that have been launched or are in the process of being launched, but not to rumors, reports, intuitions, or warnings of attacks.
Preventive war, disguised as preemptive war, has no place in constitutional or international law. To ensure that this principle is enforced, new legislation should prohibit the use of appropriated funds for any executive use of force that is unauthorized under the statute. Furthermore, the reformed War Powers Resolution must allow room for judicial oversight in the case of conflicts. A president who initiates hostilities in disregard of the statute would undoubtedly use appropriated funds to do so, forcing Congress to make the difficult decision of whether to authorize funds for troops engaged in combat. The statute should therefore state that a presidential violation of the act would create an impasse with Congress, and that separation of powers principles require the Court to decide the merits of any challenge brought against an alleged violation. And, a presidential violation of this principle should be explicitly made an impeachable offense.
End Abuses of Authorizations of Military Force
The past 8 years saw a period of lawless executive action in the area of war-making, marked by disregard for the Constitution, Congress, and the courts. The consequences for the democratic process and American security have been grave. President Obama must make plain his intention to conduct national security policy in full compliance with the law, and must demonstrate that America’s policies will not be carried out by deception.
The post-9/11 Authorization for the Use of Military Force was used by the Bush administration as justification for any and all acts the executive chose to engage in without the approval, or in many cases the knowledge, of Congress. White House lawyers claimed this AUMF allowed the President to engage in warrantless wiretapping, arbitrary detention, extraordinary rendition and numerous other illegal acts. The noxious principle of the all-powerful “unitary executive” has no place in the Constitution.
President Obama must reject language in any Authorization of Military Force that gives over-broad powers to the executive, and must pro-actively inform Congress about the extent of any executive actions under an AUMF.
When a President receives an authorization from Congress for the use of military force, it cannot be taken as a blanket authorization of unchecked executive authority.
The United Nations Charter begins with a commitment “to save future generations from the scourge of war.” This cannot be accomplished in violation of the fundamental principles of international law. Hence, the amended War Powers Resolution must make strict compliance with international law an essential ingredient of US policy.
The last 8 years saw an expansion of executive power unprecedented in American history. The consequences for constitutional rights and our system of government are grave. But in no area have the consequences been more devastating than in the area of war-making. The cost in lives, human rights and long-term strategic interests is staggering. President Obama must not only work to heal the damage wrought by the Bush administration, but restore the constitutional principles of separation of powers and ensure that future conflicts will not be launched without checks and balances.
Grade Five Music Theory - Lesson 10: Describing Chords
A chord is a group of notes which sound at the same time.
Chords are usually made up of three basic notes (but any of the notes can be doubled up without changing the nature of the chord).
To make chords, we first need to decide which key we are in. Let’s take the key of C major as an example:
Here's the C major scale.
To make a chord, we choose one of the notes of the scale and add another two notes above it. The note we start on is called the root. The notes we add are the third and the fifth. (See “Lesson 7: Intervals” for more about intervals). This gives us seven different chords:
Here are those chords in C major.
These chords are also known as triads. A triad is always made up of a root, a third above the root, and a fifth above the root.
Notice that the notes of these triads are either all on lines, or all in spaces.
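If it helps to see how mechanical this rule is, here is a minimal sketch in Python (Python is just a convenient way to write it out; the note list and the function name `triad` are my own illustrative choices, and none of this is part of the Grade 5 syllabus). Starting from any degree of the scale, the triad is simply that note plus the notes two and four scale steps above it:

```python
# Illustrative sketch only: build the seven triads of C major by taking a root
# from the scale and adding the notes a third and a fifth above it.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # scale degrees 1-7

def triad(scale, degree):
    """Return the root, third and fifth of the triad built on `degree` (1-7)."""
    i = degree - 1
    return [scale[i], scale[(i + 2) % 7], scale[(i + 4) % 7]]

for degree in range(1, 8):
    print(degree, triad(C_MAJOR, degree))
# Degree 1 gives ['C', 'E', 'G'], degree 5 gives ['G', 'B', 'D'], and so on.
```

The wrap-around (`% 7`) simply mirrors what happens on the stave: counting past B carries on from C again.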
Types of Triad
Triads/Chords can be major, minor, diminished or augmented.
Here are the chords in C major with their names:
In a minor key, the chords are built from the notes of the harmonic minor scale. This means you always have to raise the 7th degree of the scale by a semitone (half step).
Here are the chords in A minor with their names:
Major chords are made with a major third and a perfect fifth above the root.
Minor chords are made with a minor third and a perfect fifth above the root.
Diminished chords contain a minor third and a diminished fifth above the root.
Augmented chords contain a major third and an augmented fifth. You don't need to use any augmented chords in the Grade 5 Theory exam though!
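Another way to see the difference between the four triad types is to count semitones above the root. This is my own reframing rather than the lesson's wording, but it says exactly the same thing as the interval names above: a major third is 4 semitones, a minor third 3, a perfect fifth 7, a diminished fifth 6 and an augmented fifth 8. A tiny sketch:

```python
# Illustrative sketch: triad quality from semitone distances above the root.
TRIAD_TYPES = {
    (4, 7): "major",       # major third + perfect fifth, e.g. C-E-G
    (3, 7): "minor",       # minor third + perfect fifth, e.g. A-C-E
    (3, 6): "diminished",  # minor third + diminished fifth, e.g. B-D-F
    (4, 8): "augmented",   # major third + augmented fifth, e.g. C-E-G#
}

def classify(third_semitones, fifth_semitones):
    return TRIAD_TYPES.get((third_semitones, fifth_semitones), "not a standard triad")

print(classify(4, 7))  # "major"
print(classify(3, 6))  # "diminished"
```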
We also use Roman numerals to name chords. The Roman numerals 1-7 are
I, II, III, IV, V, VI and VII.
(Major chords are sometimes written with capital Roman numerals, whereas minor chords are written with small letters. You can write them all with capitals in your grade 5 theory exam.)
Here are the C major chords with their Roman numeral names:
Do you need to learn all seven? No! In Grade 5 Theory, you only have to recognise chords I, II, IV and V.
In a major key, I, IV and V are major chords but ii is a minor chord.
In a minor key, i and iv are minor, ii is diminished, and V is major and includes an accidental (because of the raised 7th of the scale).
In all the chords we've looked at so far, the lowest note in the chord was the root.
When the root is the lowest note, the chord is in root position.
Chords can also be inverted (turned upside down).
When a chord is “inverted” the position of the notes is changed around so that the lowest note of the chord is the third or the fifth, rather than the root.
Here are some inversions of the first (I) chord in C major (which you will remember contains the notes C, E and G):
Lowest note is E (the third)
Lowest note is G (the fifth)
It doesn't matter what order the higher notes are in: inversions are defined by the lowest note of the chord. This note is also known as the bass note.
We use the letters a, b and c (written in lower case letters) to describe the lowest note of a chord.
When the chord is in root position (hasn’t been inverted), we use the letter a.
When the lowest note is the third (e.g. E in C major), we use the letter b. This is also called first inversion.
When the lowest note is the fifth (e.g. G in C major), we use the letter c. This is also called second inversion.
Here is chord I in C major, in its three possible positions:
Chords can also be described with figures. Each figure has two numbers in it. The numbers refer to the intervals above the bass note (lowest note) of the chord.
The figures below are described in relation to a C major chord.
5/3 is used for root position (a) chords. Above the bass note (C), there is a note a 3rd higher (E), and another which is a 5th higher (G).
6/3 is used for first inversion (b) chords. Above the bass note (E), there is a note a 3rd higher (G), and another which is a 6th higher (C).
6/4 is used for second inversion (c) chords. Above the bass note (G), there is a note a 4th higher (C) and another which is a 6th higher (E).
In the Grade 5 Theory exam, you might be asked to identify some chords within a piece of music. The question will tell you what key the music is in.
You need to
- Pick out which notes make up the chord
- Work out what the name of the chord is
- Work out what inversion the chord is in
That's a lot to do all in one go, so we'll break it down into steps!
Chords are often not as easy to spot as in our examples above. They can include a mix of notes of different lengths, a mix of instruments, different staves and even a combination of clefs.
Look at all the notes in the chord which are enclosed in the bracket.
There might be several notes, but there will only be 3 different note names. If you have an extract for more than one instrument, don’t forget to look in all the parts. You might also get a tied note from a previous bar with an accidental that is still relevant - look very carefully.
The following bar (for cello and piano) is in F major: the chords you need to describe are in brackets, marked A and B:
Notice that chord A is split over two quaver (eighth note) beats, and both chords are split across all three staves.
Chord A has the notes A, C, and F
Chord B has the notes B flat, D and F.
Now you have picked out the notes of the chords, you are ready to name them.
We work out the chord name by finding the root position (a) chord.
The root position of the chord is where the three notes are as close together as possible. In this position, each note is an interval of a third above the one below it.
In chord A (above), we have the notes A, C and F. The closest way of putting these together is F-A-C. (There is a third between F and A, and another third between A and C).
The first note from F-A-C is F, so this is a chord of F. The interval F-A is a major third, so it's an F major chord.
Remember that the extract is in F (major), so this is chord I.
In chord B (above), we have the notes Bb-D-F. This is the closest they can be: Bb to D is a third, and D to F is a third.
The first note from Bb-D-F is Bb, so this is a chord of Bb. Bb-D is a major third, so it's a Bb major chord.
Bb is the 4th note in the key of F major, so this is chord IV.
Finally we need to work out the inversion. You need to look at the lowest note of the chord.
If the lowest note is the same as the chord name itself, it will be "a" (root position);
if the lowest note is the third, it will be "b" (first inversion);
and if it’s the fifth, it will be "c" (second inversion).
Let’s look at our above examples.
Chord A’s lowest note is A. The chord is F major, so the lowest note is the third. So this chord is b. Its full name is Ib. It's a first inversion chord.
Chord B’s lowest note is B flat and it is a chord of B flat, so it’s in root position. This chord is a. Its full name is IVa.
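The same three steps can be written out as a short program. This is only an illustration (the function names are mine, it matches notes by letter name so it leans on the key to supply the accidentals, and it is certainly not an exam technique), but it shows how mechanical the procedure is once you know the key:

```python
# Illustrative sketch of the worked examples above: stack the notes in thirds,
# name the chord by its position in the key, then read the inversion from the bass.
LETTERS = ["C", "D", "E", "F", "G", "A", "B"]

def root_position(notes):
    """Reorder three different notes so that each is a third above the last."""
    letters = [n[0] for n in notes]
    for candidate in notes:                      # try each note as the root
        i = LETTERS.index(candidate[0])
        stacked = [LETTERS[i], LETTERS[(i + 2) % 7], LETTERS[(i + 4) % 7]]
        if sorted(stacked) == sorted(letters):
            # keep the original spellings (with any flats or sharps), in stacked order
            return [next(n for n in notes if n[0] == letter) for letter in stacked]
    raise ValueError("these notes do not form a triad")

def describe(notes, bass, key_scale):
    root, third, fifth = root_position(notes)
    numeral = ["I", "II", "III", "IV", "V", "VI", "VII"][
        [n[0] for n in key_scale].index(root[0])]
    inversion = {root[0]: "a", third[0]: "b", fifth[0]: "c"}[bass[0]]
    return numeral + inversion

F_MAJOR = ["F", "G", "A", "Bb", "C", "D", "E"]
print(describe(["A", "C", "F"], bass="A", key_scale=F_MAJOR))    # prints "Ib"
print(describe(["Bb", "D", "F"], bass="Bb", key_scale=F_MAJOR))  # prints "IVa"
```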
- Don’t forget to check what key the extract is in - the instructions will tell you this information.
- Check back to see if the key signature or any accidentals affect the notes you are looking at.
- Make sure you include all the notes which are sounding on the beat which is marked. Sometimes the chord will include a note that started earlier in a bar but is still sounding. Here are two examples:
This chord (marked in brackets) is the third beat of the bar.
However, the lowest note is the left-hand B flat, which is played on the first beat but is still sounding.
This chord is the second beat of the bar.
Apart from the right-hand G and left-hand B flat, the chord also includes the right-hand E flat crotchet (quarter note), which is still sounding from the first beat of the bar.
I recently tried to translate an in-class worksheet-based activity to a virtual activity and I thought I’d share how that went.
I’m currently participating in a CSAwesome Professional Development cohort and, as part of the PD, I worked with a few other teachers to teach a lesson to the other members of the cohort. I had ownership over adapting the part of a lesson centered around collaboratively completing this worksheet:
In the lesson plan, this activity is completed in-class with students working collaboratively to accomplish this task, with the final result submitted before the end of the period. Imagining how this task might fit in my own in-person classroom, I identified a few key components I would want to emphasize with this activity, which I also want to preserve when I try to translate it to a virtual activity:
- Formative for Me – by walking around and listening in with students, I get a feel for what my students understand and whether or not I need to reteach or enrich when needed.
- Formative for Students – since this is done in-class, I can provide immediate feedback to students on how well they understand the concepts needed to solve this task
- Collaboration – this is another opportunity early in the year to emphasize collaboration norms and the benefits of seeing different perspectives. This activity is particularly good for this, since there may be other ways to solve the maze that you might not see individually.
- Quickly Scored – because this is done in-class with me walking around, and because it’s collected all at once, it’s easy for me to quickly look over the results and determine what adjustments I need to make for the next lesson. I also imagine I wouldn’t ‘grade’ this assignment – maybe for participation, but definitely not for correctness.
Looking at the list above, the first thing I realized was that turning this worksheet into an asynchronous activity would not accomplish any of these goals. I wouldn’t get any formative data about how the lesson went in a timely manner, I couldn’t provide immediate feedback to help students learn, students couldn’t easily collaborate, and it’d be a pain to collect all the materials & score.
Instead, I wanted to preserve the “in-class, collaborative” feel to the activity while we met synchronously. I knew we’d be using a platform that had breakout rooms, so I knew I wanted to use that to my advantage. Here’s what I did:
First, I had to change how students engaged with the worksheet. Since this was no longer something they were completing with pencil-and-paper, I needed to find a digital way for folks to annotate & show their work on this document. I wanted to avoid ‘drawing’ or ‘annotating’ tools since drawing figures with a mouse doesn’t look great and distracts from the actual content of the worksheet. Instead, I separated “solving the maze” into two distinct tasks: deciding on a path through the maze, then filling in the values through the maze:
In the top box, students use the paint-bucket / background tool to highlight the boxes they plan to use for the path through their maze. Once they’ve decided their path, they fill-in the bottom empty maze with the numbers generated by following the path. Here’s an example from testing it out with other teachers:
The second thing I did was, rather than leaving this in a single Google Doc for students to make copies of, I put the worksheet into a single Google Slide presentation that everyone could edit (click through the slideshow below to see all the slides):
During the actual lesson, I told students they would be in breakout rooms with 1-2 other people. The room number they were assigned corresponded to the slide they would be working on (so Breakout Room 2 works on the slide labeled Group 2). Working together, they would solve the task by directly editing the slide. They wouldn’t need to share screens, since everyone has edit access and everyone is looking at the same document – when one person makes a change, everyone sees it.
Using a single Google Slideshow and having students work on this in breakout rooms let me do several things:
- Collaboration is easy and immediate – once students know which slide they need to be on, they can start working together right away.
- Since we’re all looking at the same slide document, I can navigate between slides and see what students are doing. This is the most direct analogy to “walking around looking over shoulders” that I’ve seen while working with students in Zoom.
- Similarly, because I can peer around on slides, it's easy for me to quickly see if a group is veering off-track. If I notice that, I can pop into their breakout room and give them some immediate feedback so they can course-correct and keep working.
- Lastly, since all the work is housed in one place, it’s easy for me to collect and ‘score’. I’m not managing inconsistent documents from all my students; just this single file with everyone’s work.
In other words, this process still lets me hit all of my goals from before. It’s formative for me, since I can see student work on the slides. It’s formative for them, since I can quickly pop-in to a breakout room and provide guidance. It’s easily collaborative, since we’re all working on the same slideshow. And, it’s easy for me to ‘score’ since it’s a single slideshow with everyone’s work.
And – it had the added benefit of making a full-group shareout easier to manage. Students can also see each other's slides, so it's easy for me to say "Let's take a look at what group ___ did and hear from them". The Math Teacher in me felt callbacks to the 5 Practices of Orchestrating Productive Mathematical Discussions – I felt myself falling into those same grooves of selecting and sequencing and connecting between the work students were showing in these slides.
So – that’s pretty much how I went about it. Looking at some of my other in-class tasks from this past year, I think I can adapt this process to those same assignments & tasks. And, if you happen to be in the same boat – trying to translate collaborative in-class activities into collaborative virtual activities – then maybe this process will work for you too.
World Status of the English Language
English belongs to the Indo-European family like most languages spoken in Europe and northern India, as well as Afghanistan, Pakistan, and Iran.
Within the Indo-European family, English belongs to the GERMANIC LANGUAGE GROUP, together with German, Dutch, Danish, Swedish, Norwegian, Faroese, Icelandic, etc. More precisely, English belongs to the Western Germanic subgroup and bears a particular affinity to Frisian, spoken in the Netherlands. It differs from the Germano-Dutch subgroup, which includes German and its dialects as well as Dutch. The Northern Germanic language group includes Danish, Swedish, Norwegian (including Bokmål and Nynorsk), Faroese, and Icelandic.
| Branch | Languages |
| --- | --- |
| Eastern Germanic | Gothic (dead language since the 14th century A.D.) |
| Western Germanic | (1) Anglo-Frisian: English and Frisian; (2) Germano-Dutch: 2.1 Low German (Northern German, Dutch and Flemish, Low Saxon, Afrikaans, etc.) and 2.2 High German, comprising 1) Middle German (Rhine Franconian (Lorraine, Palatine), Hessian, Moselle Franconian, Luxembourgish, Ripuarian, Thuringian, and Upper Saxon) and 2) Upper German (Standard German, South Bavarian, Swabian, Low Alemannic (Alsatian), High Alemannic, Upper Alemannic, East Franconian, Yiddish, and Pensilfaanisch) |
| Northern Germanic | Danish, Swedish, Norwegian (Bokmål and Nynorsk), Faroese, Icelandic |
Germanic languages share a common ancestor spoken by the Germanic peoples when they were concentrated in Northern Europe. This original language, called Common Germanic (Urgermanisch in German), was in use by around 1000 B.C. We have no written texts in this language, which is to the Germanic languages what Latin is to the Romance languages. We know little about this donor language, but comparison of the documented languages points to three linguistic subsets that arose through later fragmentation: Eastern Germanic (ostic languages), Western Germanic (westic languages), and Northern Germanic (nordic languages).
The table below briefly illustrates the similarities between certain Germanic languages:
While not all words in Germanic languages resemble each other, such similarities are common. Still, many words differ from one Germanic language to the next, especially in English, owing to its heavy Latin and French influence.
English is the mother language of an estimated 341 million people and the second language of 508 million people in over sixty countries and states where it enjoys the status of official or co-official language, including American Samoa, Anguilla, Antigua and Barbuda, Australia, the Bahamas, Barbados, Belize, Bermuda, Botswana, the British Virgin Islands, Cameroon, Canada, the Cayman Islands, the Cook Islands, Dominica, the Falkland Islands, Fiji, Gambia, Ghana, Gibraltar, Grenada, Guam, Guyana, Independent State of Samoa, India, Ireland, Jamaica, Kenya, Kiribati, Lesotho, Liberia, Malawi, Malta, the Mariana Islands, the Marshall Islands, Mauritius, Micronesia, Montserrat, Namibia, Nauru, New Zealand, Nigeria, Niue, Norfolk Island, Pakistan, Palau, Papua New Guinea, Pitcairn Island, Puerto Rico, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, the Seychelles, Sierra Leone, Singapore, the Solomon Islands, South Africa, Swaziland, Tanzania, Tokelau, Tonga, Trinidad and Tobago, the Turks and Caicos Islands, Uganda, the United Kingdom, the United States, the U.S. Virgin Islands, Vanuatu, Zambia, and Zimbabwe.
The following table lists the states and countries where English is the official or co-official language, along with their total population, which does not reflect the actual number of English speakers:
| Country or territory | Region | Population | Official language(s) |
| --- | --- | --- | --- |
| Antigua and Barbuda | Americas | 67,000 | English |
| British Virgin Islands | Americas | 20,000 | English |
| Cameroon* | Africa | 15.3 M | French, English |
| Canada* | Americas | 29.6 M | French, English |
| Cayman Islands (U.K.) | Americas | 39,000 | English |
| Cook Islands (New Zealand) | Pacific | 19,000 | English |
| Fiji* | Pacific | 796,000 | English, Fijian |
| Hong Kong* (China) | Asia | 6.1 M | English, Cantonese |
| India* | Asia | 1,000 M | Hindi, English |
| Ireland* | Europe | 3.9 M | English, Irish |
| Kenya* | Africa | 29.0 M | English, Swahili |
| Lesotho* | Africa | 2.0 M | English, Sesotho |
| Marshall Islands (U.S.A.) | Pacific | 60,000 | English |
| Namibia* | Africa | 2.1 M | English, Afrikaans |
| New Zealand* | Pacific | 3.5 M | English, Maori |
| Niue Island (New Zealand) | Pacific | 2,082 | English |
| Norfolk Island (Australia) | Pacific | 1,700 | English |
| Northern Marianas (U.S.A.) | Pacific | 84,000 | English |
| Pakistan* | Asia | 141.5 M | English, Urdu |
| Palau* (U.S.A.) | Pacific | 19,000 | English, Palauan |
| Papua New Guinea | Pacific | 4.5 M | English |
| Philippines* | Asia | 69.9 M | English, Tagalog |
| Puerto Rico* (U.S.A.) | Americas | 3.9 M | Spanish, English |
| Rwanda* | Africa | 1.3 M | Kinyarwanda, French, English |
| Seychelles* | Africa | 79,000 | English, French, Creole |
| Sierra Leone | Africa | 4.7 M | English |
| Singapore* | Asia | 3.4 M | English, Chinese, Malay, Tamil |
| South Africa* | Africa | 39.3 M | Afrikaans, English |
| Tokelau (New Zealand) | Pacific | 1,000 | English |
| United Kingdom | Europe | 58.2 M | English |
| United States | Americas | 274.0 M | English |
| U.S. Virgin Islands | Americas | 94,000 | English |
| Vanuatu* | Pacific | 191,000 | English, French, Beach-la-Mar |
Countries where English is the majority mother tongue include the United States (76%), the United Kingdom (94.8%), Canada (59.3%), the Republic of Ireland (92.3%), Australia (95%), and New Zealand (91.4%). Together, these six countries form the foundation of English as a mother tongue in the world. They include approximately 306 million actual speakers and a potential of 374 million.
English is the mother tongue of a fairly insignificant portion of the population in all other countries except South Africa (5.7% or two million people). However, if we add the number of native English speakers in the countries listed above to those in India, Africa, and Oceania, the total increases from 306 million to 374 million. This is the number of English speakers (or anglophones) in the world, strictly speaking.
Below is a map of English-speaking countries in the world:
Source: Map reproduced with the kind permission of Mr. Mikael Parkvall
of Institutionen för lingvistik, University of Stockholm
English in Canada
English is the official language of Canadian federal bodies, together with French. According to the 2001 census, it is also the mother tongue of 59.3% of the population. Out of 29.6 million Canadians, 17.5 million are native English speakers, while 6.7 million are native French speakers (22.7%) and 5.2 million speak another mother tongue (17.6%). Representing nearly 60% of the population, anglophones are the linguistic majority in Canada.
Source: For more information, please see the Atlas of Canada at http://atlas.gc.ca/
Sheet of Assertion
The System of Existential Graphs is a certain class of diagrams upon which it is permitted to operate certain transformations.
There is required a certain surface upon which it is practicable to scribe the diagrams and from which they can be erased in whole or in part.
The whole of this surface except certain parts which may be severed from it by “cuts” is termed the sheet of assertion.
It is agreed that a certain sheet, or blackboard, shall, under the name of The Sheet of Assertion, be considered as representing the universe of discourse, and as asserting whatever is taken for granted between the graphist and the interpreter to be true of that universe. The sheet of assertion is, therefore, a graph.
The sheet on which the graphs are written (called the sheet of assertion), as well as each portion of it, is a graph asserting that a recognized universe is definite (so that no assertion can be both true and false of it), individual (so that any assertion is either true or false of it), and real (so that what is true and what false of it is independent of any judgment of man or men, unless it be that of the creator of the universe, in case this is fictive); and any graph written upon this sheet is thereby asserted of that universe; and any multitude of graphs written disconnectedly upon the sheet are all asserted of the universe.
What we have to do […] is to form a perfectly consistent method of expressing any assertion diagrammatically. The diagram must then evidently be something that we can see and contemplate. Now what we see appears spread out as upon a sheet. Consequently our diagram must be drawn upon a sheet. We must appropriate a sheet to the purpose, and the diagram drawn or written on the sheet is to express an assertion. We can, then, approximately call this sheet our sheet of assertion.
A certain sheet, called the sheet of assertion, is appropriated to the drawing upon it of such graphs that whatever may be at any time drawn upon it, called the entire graph, shall be regarded as expressing an assertion by an imaginary person, called the graphist, concerning a universe, perfectly definite and entirely determinate, but the arbitrary creation of an imaginary mind, called the grapheus.
The matter which the Graph-instances are to determine, and which thereby becomes the Quasi-mind in which the Graphist and Interpreter are at one, being a Seme of The Truth, that is, of the widest Universe of Reality, and at the same time, a Pheme of all that is tacitly taken for granted between the Graphist and Interpreter, from the outset of their discussion, shall be a sheet, called the Phemic Sheet, upon which signs can be scribed and from which any that are already scribed in any manner (even though they be incised) can be erased.
Hispanics and Health Care in the United States
III. Utilization of a Usual Health Care Provider and Satisfaction with Health Care
According to the survey results, more than one in four Latinos (27 percent) lack a regular health care provider.* Latinos are a diverse population, and a variety of factors need to be considered to understand why some have regular providers and some don’t. Immigration and assimilation are factors, as large shares of Latinos born outside of the United States and those who speak little English lack regular health care. Socioeconomic factors, such as education, immigration and language, weigh heavily in creating these disparities. However, there is also a substantial share of U.S.-born, fully assimilated Latinos in the ranks of those with no usual health care provider.
Hispanics who are most likely to lack a usual place for health care include men (36 percent), the young (37 percent of those ages 18–29), and the less educated (32 percent of those lacking a high school diploma). Generally, Latinos who are less assimilated into U.S. life are also at a disadvantage: 30 percent of those born outside of the 50 states, 32 percent of Spanish speakers and 43 percent of immigrants who are neither citizens nor legal permanent residents lack a regular health care provider.
The uninsured are more than twice as likely (42 percent) as the insured (19 percent) to lack a usual provider. Although lacking health insurance raises the likelihood of not having a usual health care provider, having health insurance in no way guarantees it. Of those without a usual source of health care, 45 percent have health insurance.
Finally, even though the poorly educated and less assimilated are less likely to have a regular health care provider, they comprise only a portion of the population that falls into this category. A sizeable proportion of those with no usual place for health care have at least a high school diploma (50 percent), are native born (30 percent), are proficient in English (52 percent) or are U.S. citizens (50 percent).
Importance of Having a Usual Health Care Provider
Usual Health Care Provider
Respondents are considered to have a “usual” or “regular” health care provider or place to receive health care if they:
1. Report that they have a place where they usually go when they are sick or need advice about their health, and
2. This usual place is not a hospital emergency room
Access to health care can be defined in any number of ways, but one widely used approach is to consider whether a person reports having a usual place to seek health care and advice. As is common practice,13 we consider any respondents who report having a place other than an emergency room “where they usually go when they are sick or need advice about their health” to have a regular health care provider. We consider those who report having no usual place to obtain health care, or whose only usual place for health care is an emergency room, to be lacking a health care provider.
Defined this way, having a usual provider correlates with preventive care and monitoring. And preventive care and monitoring are both associated with better long-term health outcomes, including better control of chronic conditions. Among Hispanics with a regular health care provider, 86 percent report a blood pressure check in the past two years, while only 62 percent of those lacking a provider report this. While almost three-fourths of those with a usual place to get health care report having their cholesterol checked in the past five years, fewer than half (44 percent) of those with no usual place have done so. Latinos generally are at heightened risk of diabetes, and three-fourths of those with a regular health care provider report having had a blood test to check this in the past five years, compared with only 49 percent of those lacking a regular health care provider. Among already-diagnosed diabetics, it is especially noteworthy that, while 10 percent of those with a regular place for health care have not had a test to check their blood sugar in the past two years, this share jumps to 33 percent among those with no regular provider.
The Likelihood of Having a Usual Health Care Provider
Our survey results find that 73 percent of respondents have a usual health care provider and that 27 percent of respondents lack a provider.
Nativity and assimilation are both linked to the likelihood of having a regular health care provider.
The lack of a regular health care provider varies markedly within the Latino population. For gender, age and education, the patterns mimic those in the general population.1 Latino men (36 percent) are more likely to lack a regular health care provider than women (17 percent). Younger Hispanics are especially likely to lack a regular health care provider: 37 percent of those ages 18–29 do not have one. This statistic declines with age; among respondents ages 65 and older, only 13 percent lack a regular health care provider. Higher levels of education are clearly associated with a higher likelihood of having a usual place to obtain health care. Only 19 percent of Hispanics with at least some college education lack usual health care access. That rises to 27 percent for high school graduates, and to nearly one-third (32 percent) for those with less than a high school diploma.
Place of birth and assimilation also play a role in the likelihood of having a regular health care provider. While 22 percent of U.S.-born Latinos do not have a place where they usually go for medical care, this share increases to 30 percent among those born outside the 50 states. In general, less assimilated Hispanics are those most at risk of lacking a usual place for health care. Among naturalized and native-born Hispanic citizens, 21 to 22 percent lack a usual health care provider. That compares with 31 percent of legal permanent residents and 43 percent of immigrants who are neither citizens nor legal permanent residents. Among all Latino immigrants, about half of recent arrivals—those in the country for less than five years—lack a usual place for health care, compared with 21 percent of those who have lived in the United States for at least 15 years. Hispanics who are predominantly Spanish speakers are much more likely to lack regular health care than their predominantly English-speaking counterparts (32 percent versus 22 percent).
Having health insurance is an important factor associated with having a usual place to obtain health care. While 42 percent of the uninsured lack a health care provider, only 19 percent of the insured do not have one.
Getting Care Outside of the U.S.
About one in 12 Hispanics (8 percent) in the U.S. have obtained medical care, treatment or drugs in Latin America during the previous year, and one in six (17 percent) knows a family member or friend who has done so.
Latinos who describe their recent medical care in the United States as only fair to poor are somewhat more likely to get medical services outside the country—11 percent have, compared with 6 percent of those who describe their care in this country as excellent. Hispanics without health insurance also are more likely to have received care in another country. Of those without insurance, 11 percent did; of those with insurance, 7 percent did. Of Latinos with a regular provider in the U.S. medical system, 8 percent say they have gotten care abroad, compared with 10 percent of those with no regular provider.
Hispanics ages 65 and older are the least likely to seek care outside the United States (4 percent) and those ages 50–64 are the most likely (9 percent). Foreign-born Latinos are somewhat more likely (9 percent) than the native born (6 percent) to get medical care in Latin America, and those from Mexico (10 percent) are more likely than non-Mexicans overall. A higher share of bilingual (10 percent) and Spanish-dominant (9 percent) Hispanics seek medical care in Latin America than do English speakers (4 percent).
One in 10 people with at least some college education report getting recent treatment or drugs in Latin America, compared with single-digit percentages for those with less education.
Profile of Latinos Lacking a Usual Health Care Provider
Who are the Hispanics who are not being reached by the health care system? This section looks at the characteristics of people who lack a usual health care provider.
Most Hispanics who lack a provider are male (69 percent). The population also tends to be young: 41 percent are 18–29 years of age, and 43 percent are 30–49. As is expected, Hispanics with low educational attainment comprise a large proportion of those lacking a provider; 47 percent report having less than a high school diploma. The vast majority of those with no usual place for health care are of Mexican origin (69 percent), and an additional 11 percent are of Central American origin.
Of those Hispanics who have no usual place for health care, 45 percent have health insurance.
Yet, what is also notable about those lacking a usual health care provider is the prevalence of Latinos whose characteristics suggest assimilation. While most Latinos who lack a provider are foreign born (70 percent), a full 30 percent were born in the 50 states. Half of those lacking a usual place for health care are citizens. A sizeable minority of immigrants who lack regular health care (45 percent) have lived in the United States for fewer than 10 years, but the majority (52 percent) have lived in the United States for 10 years or more. On a similar note, a slight majority of those with no usual health care provider is English-dominant or bilingual (52 percent).
Finally, 45 percent of Hispanics who have no usual place for health care say they have health insurance. So though health insurance is correlated with usual care, it does not guarantee it.
Why Don’t People Have a Usual Place for Health Care?
The survey asked respondents who lacked a usual place to get medical care or advice why they did not have one.* By far the most commonly cited reason was that they felt they did not need one because they are seldom sick (41 percent). An additional 13 percent report that they prefer to treat themselves than to seek help from medical doctors.
The next most prevalent set of responses relates to finances: 17 percent report that they lack health insurance, and 11 percent report that the cost of health care prevents them from having a regular health care provider.
About 3 percent of Hispanics respond that difficulties navigating the health care system are to blame for their lack of a regular provider: 2 percent report that they do not know where to get regular health care, and about 1 percent reports that they were unable to find a provider who spoke their language.
Finally, 3 percent say they prefer to go to a number of different health care providers, not just to one place, and 4 percent say they have just moved to the area, so presumably have yet to establish a relationship with a provider.
An overwhelming majority of Latinos believe that sick people should obtain treatment only from medical professionals, but a small minority say they seek health care from folk healers. Those who receive care from folk healers are slightly more likely to be U.S.-born than foreign born and to speak mainly English, not Spanish.
Asked whether they obtain care from a curandero, shaman or someone else with special powers to heal the sick, 6 percent of Hispanics say they do and 10 percent report that someone in their household receives such care.
About one in 12 Hispanics born in the 50 states use folk medicine, compared with one in 20 of those born in other countries or Puerto Rico. Similarly, one in 12 English-dominant Hispanics use folk medicine, as do one in 20 Spanish-dominant Latinos. Hispanics of Cuban ancestry (11 percent) are more likely to obtain such care than other Latino groups. Hispanics without health insurance or a usual place for care are no more likely to seek folk care than those with health insurance or a usual place for care.
Most Hispanics (87 percent) say that sick people should seek care only from medical professionals; only 8 percent say there is a role for folk medicine. Opinions about this echo usage patterns to some extent. Hispanics who speak English (14 percent), as well as those born in the United States (12 percent), are the most likely to say there is a role for potions and folk healing. So are younger Hispanics, as well as those with at least some college education.
Quality of Health Care
While visiting a health care provider is important, the perceived quality of care received during health care visits is equally important. To assess the perceived quality of care, respondents who received any medical care in the past year were asked to rate that care as “excellent”, “good”, “fair”, or “poor.”
More than three-quarters of Hispanics who have had medical care within the past year rate it as good to excellent: 32 percent say it was excellent, and 46 percent say it was good. At the other extreme, 17 percent say their care was only fair, and 4 percent report poor care.
In general, more educated Latinos, and those who have access to the medical system, give better evaluations of the quality of their medical care than do Latinos with lower education levels, no insurance or no regular source of care.
Women are more likely than men to say their recent medical care was good or excellent, 80 percent to 74 percent. Eighty-one percent of the college-educated report being satisfied with their care, as compared with 75 percent of people lacking a high school diploma.
Having health insurance or a usual health care provider is associated with better perceived quality of care.
Among Hispanics with health insurance, 80 percent rate their care as good to excellent; among the uninsured, 70 percent do. Similarly, 80 percent of Latinos who have a usual health care provider rate their care as good to excellent, compared with 64 percent who have no usual provider. Among those with a usual provider, Hispanics who usually get care in doctors’ offices give higher ratings than those who go to medical clinics. Fully four in 10 who go to a doctor’s office rate their care as excellent, compared with 27 percent of those who get care from a clinic.
Generally, nativity and assimilation are not strongly associated with perceived quality of care. However, a mismatch between a Hispanic’s primary language and the language spoken at his or her appointment lowered the satisfaction ratings somewhat. For example, 30 percent of Spanish speakers whose appointments usually are conducted in English rate their care fair to poor, compared with 19 percent of those whose appointments are in Spanish.
Reasons for Poor Treatment
Respondents were also queried as to whether they had received poor service at the hands of a health care professional in the past five years. The 23 percent who said they had received poor treatment were asked about four potential reasons. The largest share of Hispanics (31 percent) cited their inability to pay as the reason for poor treatment, followed by their race or ethnicity (29 percent), their accent or how they speak English (23 percent) and their medical history (20 percent).
Respondents who lacked health insurance, or a usual health care provider, were especially likely to claim that their inability to pay, their race, or their language skills contributed to their poor treatment. Forty-one percent of Hispanics with no usual place for health care, and 53 percent of Hispanics with no health insurance, reported that their inability to pay contributed to poor treatment. In comparison, 27 percent of Latinos with a usual provider reported as much, as did 20 percent of Latinos with health insurance. Thirty-eight percent of Latinos with no usual provider and 34 percent of those with no health insurance reported that their race contributed to poor treatment by medical professionals, as compared to 25 percent of those with a usual provider and 26 percent of those with health insurance. Thirty-two percent of Latinos who lacked either health insurance or a usual provider reported that their accent or poor English skills led to poor treatment, while 20 percent of the insured and those with a usual provider reported as much.
Other groups more likely than Hispanics overall to cite a lack of money as a reason for poor treatment include immigrants who aren’t citizens or legal permanent residents (45 percent), Spanish speakers (38 percent) and Latinos who did not graduate from high school (41 percent).
Among the groups that are more likely than Hispanics overall to cite race as a reason they were treated poorly are Spanish speakers (36 percent), and noncitizens (38 percent of legal permanent residents and 35 percent of immigrants who are not citizens or legal permanent residents).
Among the groups most likely to cite language as the reason they received poor care are Hispanics with less than a high school education (37 percent), immigrants (33 percent) and those who mainly speak Spanish (43 percent).
Medical history is given as a reason for poor care by a somewhat higher share of older Hispanics (25 percent) and those whose primary language is Spanish (25 percent).
- Pleis JR, and Lethbridge-Cejku M. “Summary Health Statistics for U.S. Adults: National Health Interview Survey, 2006.” National Center for Health Statistics, Vital and Health Statistics, Series 10:235, 2007.
Three cities loom large in the life and death of John F. Kennedy: Washington, D.C., where he was president and senator; Dallas, where he died; and Boston, where he was born.
With the 50th anniversary of his Nov. 22, 1963 assassination at hand, all three offer places where you can learn more about him or honor his legacy. Here's a list of museums, monuments, historic sites and events in those cities and a few others around the country. (Note several sites are affected by the federal government shutdown.)
-Tour: A walking tour of downtown Boston looks at JFK as an emerging politician in the context of his Irish immigrant ancestors and family political connections, with stops at the JFK statue on the Boston State House lawn; the Union Oyster House, where he often dined in an upstairs booth; the Parker House hotel, where he proposed to Jacqueline Bouvier, and Faneuil Hall, where he gave his last speech in the 1960 campaign. The $12 tour meets Wednesday-Saturday, 11:30 a.m., Boston Common Visitor Center, 139 Tremont St., http://www.kennedytour.com .
-Presidential Library and Museum: The I.M. Pei-designed museum houses permanent displays on the campaign trail, Kennedy's family and the first lady, along with special exhibits on the Cuban missile crisis and Jackie's White House years, http://www.jfklibrary.org/ (temporarily closed by shutdown).
-Birthplace: Kennedy, one of nine children, was born at 83 Beals St., in Brookline, a Boston suburb, in 1917. The house is a National Park site, http://www.nps.gov/jofi (temporarily closed by shutdown).
-Hyannis: In the 1920s, JFK's father Joseph bought a waterfront vacation home for his family in Hyannis Port on Cape Cod, about 75 miles (120 kilometers) from Boston. Other family members including JFK bought property nearby. A seasonal cruise operates through Oct. 27 offering views of the Kennedy Compound from the water, http://www.hylineharborcruise.com . The privately operated JFK Hyannis Museum, open through November, has an exhibit on his last visits to the Cape, http://jfkhyannismuseum.org .
-Sixth Floor Museum at Dealey Plaza: Kennedy's assassin Lee Harvey Oswald fired at the president's motorcade from a window on the sixth floor of the Texas School Book Depository. The site is now the Sixth Floor Museum. The privately operated museum has exhibits about the assassination and is hosting a series of talks by individuals connected to the events of that day, including authors of several new books; 411 Elm St., http://www.jfk.org .
-Memorial ceremony: On Nov. 22, church bells
Carb counting is a way of better understanding how carbohydrates affect your blood sugar, medication requirement and insulin requirement.
For people with type 1 diabetes and those with type 2 diabetes who require insulin, carbohydrate counting is a way of matching insulin requirements with the amount of carbohydrate that you eat or drink.
For people with type 2 diabetes who don’t require insulin, carbohydrate counting is a way of regulating the amount of carbohydrate you consume and monitoring how this affects your blood glucose control, weight management and medication intake.
Carbohydrate counting requires patience and diligence. Learning it successfully means understanding carbohydrates, knowing how to adjust your insulin or medication accordingly, and measuring your blood glucose levels regularly to see the effect.
What are carbohydrates?
Every carbohydrate we eat is converted into glucose and has an impact on blood sugar levels.
Carbohydrates are commonly found within the following foods:
- Grains (breads, pasta, cereals)
- Root crops (potatoes, sweet potatoes, and yams)
- Most alcoholic drinks (Beer, cider, lager, most cocktails)
- Desserts and sweets
- Most dairy products, except cheese
- Sugars, including sucrose, fructose, dextrose and maltose
How should I count carbohydrates?
Most people count carbohydrates using grams, with one serving equal to 15 grams of carbohydrate.
Most foods are only partially carbohydrate (although some foods are entirely carbohydrate), but the effect of 15 grams of carbohydrate will be the same whether it comes from bread, biscuits or other foods.
To ascertain the carbohydrate content of these foods, it is necessary to use food labels, reference books or computer programs, along with a scale and a list of carbohydrate values.
There are two methods of counting carbohydrates: basic carb counting and consistent carb counting. Both ways involve calculating the total carbohydrate of a food, knowing how many carbs you can eat, and then matching this up with the portion size and any medication you take.
Basic carb counting
Basic carb counting can help you learn how certain foods affect your blood glucose levels, and the aim is to eat a consistent amount of carbs each day. This is most likely to be adopted by people with non-insulin treated type 2 diabetes.
A dietitian can advise you on how much carbohydrate you should eat at each meal based on your medication, weight goals and overall diabetes control. In the interim, you could ask your doctor for an appropriate amount of carbs to eat at each meal (e.g. 45-60 grams per meal) before your meeting with a dietitian.
Consistent carb counting
Consistent carb counting, also known as advanced carb counting, can be used by people with diabetes who are treated with rapid-acting insulin. To count carbs, you will use an insulin-to-carb ratio, which calculates how much insulin you need to cover the carbohydrate in your meal.
A commonly used ratio is one unit of rapid-acting insulin per 10g of carbohydrate, or 1:10. This can vary from person to person, and your ratio might end up being 1:15, 2:10, or something else. So, if your ratio was 1:15, eating 45g of carbohydrate with a meal would require you to inject three units of insulin.
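To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above. It is illustrative only: the function name and example figures are invented for this sketch, and real ratios and doses must come from your diabetes team.

```python
def insulin_units(carbs_g, grams_per_unit):
    """Units of rapid-acting insulin needed to cover a meal.

    grams_per_unit is the number of grams of carbohydrate covered by one
    unit of insulin, e.g. 15 for a 1:15 ratio or 10 for a 1:10 ratio.
    """
    return carbs_g / grams_per_unit


print(insulin_units(45, 15))  # 1:15 ratio, 45g of carbs -> 3.0 units
print(insulin_units(45, 10))  # 1:10 ratio, 45g of carbs -> 4.5 units
```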
You can also have different ratios for different times of day.
Carb counting example
John has the following insulin-to-carb ratios; a short worked example applying them follows the list.
- Breakfast: 2 : 10 (2 units of rapid-acting insulin per 10g of carbs)
- Lunch: 1.5 : 10 (1.5 units per 10g of carbs)
- Dinner: 1 : 10 (1 unit per 10g of carbs)
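Here is a rough sketch of how ratios like John's could be applied in practice. The meal carbohydrate figures are hypothetical, chosen only to show the multiplication, and the ratios are expressed as units per 10g of carbs to match the list above.

```python
# John's insulin-to-carb ratios, as units of rapid-acting insulin per 10g of carbohydrate
ratios = {"breakfast": 2.0, "lunch": 1.5, "dinner": 1.0}

# Hypothetical carbohydrate content of each meal, in grams (illustration only)
meal_carbs = {"breakfast": 30, "lunch": 60, "dinner": 50}

for meal, carbs in meal_carbs.items():
    units = carbs / 10 * ratios[meal]
    print(f"{meal}: {carbs}g of carbs -> {units:g} units of rapid-acting insulin")

# breakfast: 30g of carbs -> 6 units of rapid-acting insulin
# lunch: 60g of carbs -> 9 units of rapid-acting insulin
# dinner: 50g of carbs -> 5 units of rapid-acting insulin
```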
Your health care team should help you assess your own insulin-to-carb ratio. Make sure you log your blood glucose levels to see how your ratio affects your readings. You may need to inject correction doses of insulin if your blood sugar climbs too high; you should discuss correction doses with your doctor.
It can take a while to become competent at counting carbohydrates, and for some time it will be necessary to weigh and measure foods.
What carbohydrate counting equipment do I need?
Many people with diabetes have kitchen scales for weighing food, as well as equipment for measuring volume. Most food labels give both weight and volume measurements, but some do not.
The following techniques can help in understanding carbohydrate counting:
- Using food labels, scales and a calculator makes it possible to work out the carbohydrate content of food; a short worked example follows this list.
- Using a scale is useful for measuring carbs in a range of different foods from fruit and vegetables to rice and cereal. Refer to the food packaging or you can get carb counts from nutrition books or the Internet.
- Take your time with carb counting as it is easy to make mistakes if maths is not a strength or if you’re rushing.
- Be aware that some foods have different carb counts depending on whether the food is cooked or uncooked. This can sometimes make a big difference so be careful with this.
- Nutrition books and online resources can provide useful information and a quick and easy way to look up brand-name food information. Many recipe books include detailed carbohydrate information.
- The Carbs and Cals book (or app) is a very popular book for helping with carb counting as it provides images of a range of foods and serving sizes along with the associated carb counts.
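As a rough illustration of the label-and-scale arithmetic mentioned in the list above, the sketch below scales a label's "carbohydrate per 100g" figure by a weighed portion. The per-100g values here are invented for the example; always use the figures on the actual packaging or in a reliable reference.

```python
def carbs_in_portion(carbs_per_100g, portion_weight_g):
    """Grams of carbohydrate in a weighed portion, based on the
    'carbohydrate per 100g' figure from a food label."""
    return carbs_per_100g * portion_weight_g / 100


# Hypothetical label values, for illustration only
print(carbs_in_portion(30.0, 180))  # e.g. cooked rice at 30g/100g, 180g portion -> 54.0g
print(carbs_in_portion(12.0, 150))  # e.g. an apple at 12g/100g, 150g portion -> 18.0g
```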
How do I understand more about carbohydrate counting?
The best way to learn carbohydrate counting is to take part in a carbohydrate counting course.
If you are on insulin, would like to go on a carbohydrate counting course and have not attended one in recent years, your GP, diabetes consultant or diabetes specialist nurse can refer you to one.
Your diabetes health team should also be able to arrange one-to-one guidance on carbohydrate counting if you need help at any time.
The Low Carb Program is an online education program launched by Diabetes.co.uk that explains the impact of carbohydrates on blood glucose levels.
What the community have to say about carbohydrate counting
- Carbsrok: You need to be counting the carbohydrate in your food. Please go back to your Diabetes team and say: ‘look, I am completely confused. I need help sorting this.’ Take pen and paper with you so you can write things down. Ask for guidance on Carbohydrates. And how to match your insulin to food intake. A low GI/ Glycaemic load need to be considered. Take one day at a time otherwise you will feel completely overwhelmed by it all.
- Copepod: Most food packets have carbohydrate content in the nutritional information – you need to count total carbohydrate, not just the sugar, and also bear in mind some foods have different values for raw & cooked food. Having said that, once I’ve weighed a food once, I just estimate by sight after that, which is useful when eating away from home. For fruit and vegetables, you’ll need a guide, either book or online.
- Gazhay: I would highly encourage every diabetic to go on DAFNE course, even the ‘carb counting haters’. As it can be a very individual thing, and surely getting all the education about it is a good thing, whether you choose to continue it or not.
- Wallycorker: Over the last 5 months after attending sessions on carbohydrate management she had attained a magnificent HbA1c of 5.6 – down from readings near to – or in – double figures. What’s more by following the techniques explained to her she had lost a massive five stones in weight in the same period of time. Yes five stones in five months – I am certain that is what she said – just through carbohydrate management or carb counting as it is sometimes known.
- Hellsbells: I was diagnosed with T2 almost 2 years ago. I waited 5 months to see dietician who advised me to eat carbohydrate based meals. In fact, when I told her I was carb counting as a way of controlling my bg levels she told me my medication (metformin) would not work if I didn’t eat plenty of carbs! I also asked about portion control. She replied that she would discuss this with me at our next meeting which would be in 3 months time. Needless to say, I didn’t go back.