Dataset schema (one record per document):
- text: string, 182 to 626k characters
- id: string, 47 characters
- dump: string, 1 distinct value
- url: string, 14 to 379 characters
- file_path: string, 139 to 140 characters
- language: string, 1 distinct value
- language_score: float64, 0.65 to 1
- token_count: int64, 49 to 202k
- score: float64, 2.52 to 5.34
- int_score: int64, 3 to 5
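Because every record follows this schema, the dump can be loaded and filtered programmatically. A minimal sketch, assuming the records have been exported to a JSON Lines file (the file name below is hypothetical):

```python
# Hypothetical example: filter records of this dump by the schema fields above.
import pandas as pd

# Assumed JSON Lines export of the records; adjust the path to your own copy.
df = pd.read_json("cc_main_2016_26_sample.jsonl", lines=True)

# Keep confidently English, mid-length, higher-quality records.
subset = df[
    (df["language"] == "en")
    & (df["language_score"] >= 0.9)
    & (df["token_count"].between(200, 1500))
    & (df["int_score"] >= 3)
]
print(subset[["url", "token_count", "score"]].head())
```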
Today is Earth Day. It's a chance to reflect on how everyday actions impact the environment. Earth Day is also a welcome reminder that small behaviors make a big difference. Recycling is one of the easiest things you can do, especially with the recycling drop-box program offered by the Ottawa-Sandusky-Seneca Solid Waste District and serviced by Rumpke. Visit www.rumpkerecycling.com or www.recycleoss.org for a map of convenient recycling locations near you. The tips below can help you do even more.
Top five ways you can help the recycling program:
1. No plastic bags. Recyclables should be placed unbagged in the container. If you use plastic bags to collect or transport recyclables, empty the recyclables at the recycling site and take the bags home to reuse. Or consider taking plastic bags back to the grocery store for recycling.
2. Plenty of paper. Remember to recycle computer paper, magazines, newspapers and inserts, envelopes with or without windows, postcards and junk mail.
3. Recycle right. Each recycling drop-box is stickered with a list of acceptable and unacceptable items. The list of acceptable items includes many things found in your home. Recycling the right items helps the program run more smoothly and keeps costs low.
4. Flatten cardboard. Do you shop online? If so, you probably have a bunch of cardboard boxes, which are great to recycle. Help save space in the container by flattening boxes.
5. Summer parties. Are you hosting a party soon? Memorial Day? Graduation? Fourth of July? Place a recycling container next to each trash can to make it easy for your guests to recycle glass and plastic bottles, along with beer and soda cans.
Rumpke is proud to support recycling and recycling education throughout the community. Let's all make the most of the opportunity to recycle more and trash less. Happy Earth Day!
Source: http://advertiser-tribune.com/page/content.detail/id/565354/Five-ways-to-help-recycling-program.html?nav=5008
What is neck pain? Because of its location and range of motion, your neck is often left unprotected and subject to injury. Neck pain can range from mild discomfort to disabling, chronic pain.
What causes neck pain? Neck pain can result from many different causes, from injury to age-related disorders or inflammatory disease. Causes of neck pain and problems may include the following:
- Injury (damage to the muscles, tendons, and/or ligaments)
- Herniated disk in the neck
- Arthritis (such as osteoarthritis or rheumatoid arthritis)
- Cervical (neck) disk degeneration
- Congenital (present at birth) abnormalities of the vertebrae and bones
How is neck pain diagnosed? Along with a complete medical history and physical exam, diagnostic procedures for neck pain may include the following:
- Blood tests. These tests can help determine the diagnosis of inflammatory disease.
- Electromyogram (EMG). A test to evaluate nerve function.
- X-ray. A diagnostic test that uses invisible electromagnetic energy beams to produce images of bones on film.
- Magnetic resonance imaging (MRI). A diagnostic procedure that uses a combination of large magnets and a computer to produce detailed images of organs and structures within the body; it can often detect damage or disease of internal structures within our joints, or in a surrounding ligament or muscle.
- Computed tomography scan (also called a CT or CAT scan). A diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce images of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs. CT scans are more detailed than general X-rays.
How is neck pain treated? Specific treatment for neck pain will be determined by your doctor based on:
- Your age, overall health, and medical history
- Your diagnosis
- Extent of the condition
- Your tolerance for specific medications, procedures, or therapies
- Expectations for the course of the condition
- Your opinion or preference
Treatment may include:
- Medication (to reduce inflammation and control pain)
- Physical therapy
- Neck brace or immobilization
When should I call my health care provider? Treatment for neck pain is recommended as soon as the pain starts, to prevent further injury or damage.
Living with neck pain
Living with neck pain can be difficult. But the following treatments, often in combination, prove effective both immediately and over time. To manage your neck pain, you may try medications, rest, physical therapy, and exercise.
- Neck pain can range from mild discomfort to disabling, chronic pain.
- Neck pain can result from many different causes, from injury to age-related disorders or inflammatory disease.
- Seeking medical advice as soon as possible after the injury will minimize future damage and inflammation.
- Once you have been treated for the initial injury, a program of physical rehabilitation may be necessary. It is important to follow through with your program and exercises to both strengthen and build muscles to support your activities.
- Using good body mechanics may prevent future injury.
Tips to help you get the most from a visit to your health care provider:
- Before your visit, write down questions you want answered.
- Bring someone with you to help you ask questions and remember what your provider tells you.
- At the visit, write down the names of new medicines, treatments, or tests, and any new instructions your provider gives you.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit.
- Know how you can contact your provider if you have questions.
Source: http://akrongeneral.staywellsolutionsonline.com/search/85,P00929
This is the first in a series of articles on the root problems of most chronic illnesses: diabetes, heart disease, hypertension, autoimmune disorders, Alzheimer’s disease, chronic fatigue, Parkinson’s disease and early aging. Over the next few months, I will share how oxidative stress, inflammation, hormone imbalance and toxins cause chronic illness and how to prevent early aging and chronic illness.
FREE RADICALS AND TORNADOES
When we mix oxygen with food, we get energy. As our body transforms oxygen and food into energy, we make “free radicals.” Free radicals include compounds like peroxides and are like little tornadoes that spin off more little tornadoes. These free radical tornadoes go around and damage cells. Free radicals damage the protein and fats in cell membranes, mitochondria — which are the energy factories in the cells — and even sometimes DNA, leading to cancer. It is estimated the average human cell sustains 10,000 hits per day from free radicals. When cells are damaged by free radicals, the body reacts with inflammation. Chronic inflammation can lead to more cell damage. Free radicals lead to a toxic spiral of cell damage, inflammation and cell death. To stay healthy, the body must maintain a healthy balance between formation of free radicals and destruction of free radicals. How does the body do this? It tries to keep the free radicals within the cells and breaks the free radicals down. It uses antioxidants like vitamins C and E to destroy the free radicals and uses natural repair mechanisms to mend damaged cells.
COMBATING OXIDATIVE STRESS
First, try to avoid toxins like cigarette smoke, pesticides, solvents, ozone and other chemicals that increase free radical production. Second, we must have adequate dietary intake and absorption of antioxidant nutrients found in fruits and vegetables. Basically, eat more plants so your plate has a variety of colors at every meal. Americans’ poor intake of fruits and vegetables means most Americans do not have enough antioxidants to protect them from the damaging effects of free radicals.
MEASURING OXIDATIVE STRESS
We can actually measure your body’s oxidative stress levels with special lab tests, including glutathione, serum lipid peroxides, 8-OHdG and enzymes that increase with oxidative stress. The best defense against oxidative stress is to listen to what your mother always told you: Eat your fruits and vegetables. This means at least five servings a day, and 10 or 12 servings are better for maximum health. Next month, we will learn more about how inflammation causes chronic disease. Hopefully, 2012 will be a year to attain better health by understanding how your body works. So dress up your plate and eat a rainbow of fresh fruits and veggies to fight off free radicals.
Dr. Steinmetz is a board-certified family medical doctor based in Alexandria who uses conventional and integrative practices. She welcomes reader questions at [email protected].
Source: http://alextimes.com/2012/01/long-live-you-new-year-new-you-oxidative-stress/?leadid=10&form=entry/10/145/entry/10/120/entry/10/68/entry/10/77/entry/10/68/entry/10/54/entry/10/56/entry/10/75/entry/10/61/entry/10/77/entry/10/79/entry/10/87/
In 1564, in his 50th year, Andreas Vesalius died, alone and unattended, on the remote island of Zante, off the west coast of Greece. This year, 1964, commemorates the quadricentenary of the death of this illustrious and fearless iconoclast, the author of the epochal volume, De Fabrica Humani Corporis. Vesalius, his name derived from the Flemish name Wesel, was born in Brussels in the year 1514. His forebears were closely affiliated with the medical profession: his great-great-grandfather, great-grandfather, and grandfather were physicians, his father was an apothecary. The bequeathal of this medical heritage to Vesalius was manifested in him at an early age. He began his medical studies at Louvain, where he displayed an avid interest in anatomical dissection, and became proficient in Arabic, Greek, and Latin. In 1533, Vesalius left for Paris to continue his medical education. At Paris, the instruction of anatomy was directed by the eminent anatomists,
Source: http://archinte.jamanetwork.com/article.aspx?articleid=570517
Yesterday, the International Energy Agency (IEA) released a report in which it urges the adoption of four approaches to curb greenhouse gas emissions by 2020. In announcing the report the IEA noted that, at the earliest, the next international treaty won't even be finalized before 2015 (and its implementation won't start until 2020). But in the intervening years, we're likely to build infrastructure and continue emissions that will make the goals of that agreement nearly impossible to reach. In the interim, the IEA suggests steps that are needed to keep the planet on a path that would limit warming to 2°C.
The report comes immediately in the wake of the first recordings of carbon dioxide levels that exceed 400 parts per million at Mauna Loa, far from any sites of industrial emissions. These levels haven't been seen in millions of years and, if current emissions trends continue, we're expected to reach temperatures we've not seen in equally long: between 3.6°C and 5.3°C warmer than the preindustrial era, according to the IEA. And, as the World Bank recently noted, that sort of rise would radically reshape our world.
So the IEA is sold on the goal of limiting future temperature rises to 2°C. Unfortunately, energy-related emissions went up by 1.4 percent last year to 31.6 gigatons. If we wait until 2015 to finalize plans to keep future temperature rises to 2°C, the IEA estimates we'll need to spend $5 trillion to get back on track. In contrast, the IEA estimates that we can stay on track by spending $1.5 trillion in the years between now and 2020. If spent according to the IEA's new four-step plan, we'll save just as much money as we spend due to more efficient use of energy. The IEA focused on technologies that are already on the market and are in active use in some countries, meaning that there are no barriers other than cost and scaling.
One of the steps is something the IEA has been arguing for a while: given that there is a finite supply of fossil fuels and that burning them creates problems, it makes no sense to actively encourage their use. Despite this, fossil fuel is heavily subsidized in many countries. The IEA has consistently called for these subsidies to be phased out; one of its four points is to simply accelerate the phasing out.
While also on the subject of waste, the IEA would like to see oil and gas producers do more to capture methane that is currently allowed to escape into the atmosphere. Methane is a potent greenhouse gas and is eventually converted to CO2 in the atmosphere; capturing more of it will account for 18 percent of the savings.
In the US, the expansion of renewables and natural gas has led to a significant decline in the use of coal, which has led to a corresponding drop in carbon emissions (coal is the least efficient fossil fuel in terms of energy per emissions); China's emissions growth is slowing for similar reasons. The IEA would like to see that happen globally. If we limit the construction and use of the least efficient coal plants for the rest of the decade, it could account for 20 percent of the IEA's goals.
But the biggest step we can take is simple efficiency. Building or retrofitting more efficient buildings, industry, and transportation could account for nearly half the emission changes needed for the IEA's plan. And this is where most of the money for the plan comes from; efficiency measures can usually save a significant amount on energy expenses, often with time windows of less than a decade.
These savings are required to offset the cost of mothballing some of the coal plants before the end of their expected lifespans. These savings will, of course, exact a cost somewhere, primarily in the energy industries. If the IEA's plan were adopted, coal consumption would obviously drop and some fossil fuel reserves that are currently slated for development will not be needed as quickly as expected. As a bit of a sop to the energy industry, the IEA notes that the problems caused by climate change—water shortages, severe storms, sea level rise—will exact a cost on the industry's infrastructure as well. Overall, the IEA's plan seems like a solid one. But the group has been calling for many of these steps for a number of years and responses have been slow. There's definitely an element of the "tragedy of the commons" here. Although it's appealing in general to think that these efficiency measures could allow finite reserves of fossil fuels to last decades longer than they would otherwise, the countries and companies relying on the income from developing them are unlikely to be happy to go along with the plan.
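As a rough, back-of-the-envelope reading of the figures quoted above (not a calculation taken from the IEA report itself), the cost of delay and the share implied for the fourth measure can be sketched as follows:

```python
# Illustrative arithmetic on the numbers quoted in the article; assumptions noted inline.
cost_act_now = 1.5e12      # USD, spending through 2020 under the four-step plan
cost_if_delayed = 5.0e12   # USD, estimated cost of getting back on track after waiting
print(f"Delay multiplies the bill by about {cost_if_delayed / cost_act_now:.1f}x")

# Stated shares of the 2020 goal: efficiency "nearly half" (taken here as roughly 0.5),
# limiting inefficient coal 0.20, capturing methane 0.18.
shares = {"efficiency": 0.5, "coal limits": 0.20, "methane capture": 0.18}
implied_subsidy_share = 1 - sum(shares.values())
print(f"Share left implied for subsidy phase-out: about {implied_subsidy_share:.0%}")
```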
Source: http://arstechnica.com/science/2013/06/international-energy-agency-finds-cost-neutral-route-to-major-co2-cuts/?comments=1&post=24679403
When approaching a complex situation that requires a creative solution, some leading research points to a novel approach to problem solving. Don’t solve the problem based on how you would solve it, but instead pretend to be someone else and solve it from their perspective.
Via BPS Digest:
According to Evan Polman and Kyle Emich, we’re more capable of mental novelty when thinking on behalf of strangers than for ourselves. This is just the latest extension of research into construal level theory, an intriguing concept that suggests various aspects of psychological distance can affect our thinking style. Across four studies involving hundreds of undergrads, Polman and Emich found…that participants were more likely to solve an escape-from-tower problem if they imagined someone else trapped in the tower, rather than themselves (a 66 vs. 48 per cent success rate). Briefly, the tower problem requires you to explain how a prisoner escaped the tower by cutting a rope that was only half as long as the tower was high. The solution is that he divided the rope lengthwise into two thinner strips and then tied them together. The researchers were careful to consider a range of possible confounding factors, including confidence in our knowledge of ourselves versus others, emotional involvement and feelings of closeness. None of these made much difference to the main result. On the other hand, among participants who tackled the tower problem, it was those who said afterwards that they felt the tower was further away, who tended to have found the solution. This reinforces the researchers’ claim that solving a problem for a stranger is easier because of the feeling of psychological distance that it creates.
This concept could be particularly useful for those engaged in negotiations, mediation, arbitration, etc. By framing your position from the viewpoint of another person, it might be possible to arrive at a more creative solution. It could also be useful in anticipating or predicting what angle an opponent might be positioning towards. So next time you’re about to tackle a problem head on, take a moment to step into someone else’s shoes and think about what they would do to solve the problem; you might just get a different result.
Polman, E., & Emich, K. J. (2011). Decisions for Others Are More Creative Than Decisions for the Self. Personality and Social Psychology Bulletin. PMID: 21317316
Source: http://associatesmind.com/2011/03/11/need-to-solve-a-problem-pretend-you%E2%80%99re-someone-else/
American low-cost orbital launch vehicle. The Falcon 9 Heavy would consist of a standard Falcon 9 with two additional Falcon 9 first stages as liquid strap-on boosters. The Falcon 9 first stage had been designed to support the additional loads of this configuration, with common tanking and engines across both vehicles. Initial architectural work had begun in 2008, and first availability of the Falcon 9 Heavy would be as early as 2010.
LEO Payload: 28,000 kg (61,000 lb) to a 200 km orbit at 28.00 degrees. Payload: 12,000 kg (26,000 lb) to GTO, 28 deg. Boost Propulsion: Lox/Kerosene. Cruise Thrust: 66.6 kN (6,800 kgf / 14,972 lbf). Cruise engine: Kestrel. Initial Operational Capability: 2010. Status: In development.
Gross mass: 885,000 kg (1,951,000 lb). Payload: 28,000 kg (61,000 lb). Height: 54.90 m (180.10 ft). Diameter: 3.60 m (11.80 ft). Span: 3.60 m (11.80 ft). Thrust: 15,000.00 kN (3,372,000 lbf). Apogee: 200 km (120 mi).
Falcon: Falcons are a family of two-stage, reusable, liquid oxygen and kerosene powered launch vehicles, designed for cost-efficient and reliable transport of satellites and manned spacecraft to low Earth orbit. The Falcon 1 satellite launcher began launches in 2006, with the Falcon 9 - as large as a Saturn I - flying in 2010. The Falcon series was the only successful project among many attempts to privately develop a low-cost launch system since the 1960s.
LCLV: Various independently funded launch vehicles have been advocated, designed, and even developed over the years. A lot of these are attempts to build low-cost launch vehicles using simpler technology. Often such projects begin based on a low-cost liquid fuel technology but end up just trying to sell various combinations of Castor solid fuel stages. These enterprises often discover there's more to coming up with a reliable launch vehicle than slapping together a bunch of 'off the shelf' rocket motors and lighting the fuse. On the other hand, if there is ever a breakthrough in less expensive access to space, it will come through one of these entrepreneurial schemes.
Associated Manufacturers and Agencies: SpaceX, USA. American manufacturer of rockets, spacecraft, and rocket engines.
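As a quick sanity check on the figures above, the liftoff thrust-to-weight ratio implied by the quoted gross mass and total thrust can be computed. A minimal sketch using standard gravity (a back-of-the-envelope estimate, not an official figure):

```python
# Back-of-the-envelope check of the quoted specs: liftoff thrust-to-weight ratio.
G0 = 9.80665             # standard gravity, m/s^2
gross_mass_kg = 885_000  # quoted gross mass
thrust_n = 15_000e3      # quoted liftoff thrust: 15,000 kN expressed in newtons

twr = thrust_n / (gross_mass_kg * G0)
print(f"Liftoff thrust-to-weight ratio ~ {twr:.2f}")  # roughly 1.7
```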
Source: http://astronautix.com/lvs/falheavy.htm
There is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of “Life” that makes the concept of “Value” possible. It is only to a living entity that things can be good or evil. Only a living entity can have goals or can originate them. And it is only a living organism that has the capacity for self-generated, goal-directed action. On the physical level, the functions of all living organisms, from the simplest to the most complex—from the nutritive function in the single cell of an amoeba to the blood circulation in the body of a man—are actions generated by the organism itself and directed to a single goal: the maintenance of the organism’s life. An organism’s life depends on two factors: the material or fuel which it needs from the outside, from its physical background, and the action of its own body, the action of using that fuel properly. What standard determines what is proper in this context? The standard is the organism’s life, or: that which is required for the organism’s survival. When applied to physical phenomena, such as the automatic functions of an organism, the term “goal-directed” is not to be taken to mean “purposive” (a concept applicable only to the actions of a consciousness) and is not to imply the existence of any teleological principle operating in insentient nature. I use the term “goal-directed,” in this context, to designate the fact that the automatic functions of living organisms are actions whose nature is such that they result in the preservation of an organism’s life. In a fundamental sense, stillness is the antithesis of life. Life can be kept in existence only by a constant process of self-sustaining action. The goal of that action, the ultimate value which, to be kept, must be gained through its every moment, is the organism’s life.
Source: http://aynrandlexicon.com/lexicon/life.html
Henry Gray (1825–1861). Anatomy of the Human Body. 1918.
maternal blood, and give up to the latter its waste products. The blood, so purified, is carried back to the fetus by the umbilical vein. It will thus be seen that the placenta not only establishes a mechanical connection between the mother and the fetus, but subserves for the latter the purposes of nutrition, respiration, and excretion. In favor of the view that the placenta possesses certain selective powers may be mentioned the fact that glucose is more plentiful in the maternal than in the fetal blood. It is interesting to note also that the proportion of iron, and of lime and potash, in the fetus is increased during the last months of pregnancy. Further, there is evidence that the maternal leucocytes may migrate into the fetal blood, since leucocytes are much more numerous in the blood of the umbilical vein than in that of the umbilical arteries. The placenta is usually attached near the fundus uteri, and more frequently on the posterior than on the anterior wall of the uterus. It may, however, occupy a lower position and, in rare cases, its site is close to the orificium internum uteri, which it may occlude, thus giving rise to the condition known as placenta previa.
Separation of the Placenta.—After the child is born, the placenta and membranes are expelled from the uterus as the after-birth. The separation of the placenta from the uterine wall takes place through the stratum spongiosum, and necessarily causes rupture of the uterine vessels. The orifices of the torn vessels are, however, closed by the firm contraction of the uterine muscular fibers, and thus postpartum hemorrhage is controlled. The epithelial lining of the uterus is regenerated by the proliferation and extension of the epithelium which lines the persistent portions of the uterine glands in the unaltered layer of the decidua. The expelled placenta appears as a discoid mass which weighs about 450 gm. and has a diameter of from 15 to 20 cm. Its average thickness is about 3 cm., but this diminishes rapidly toward the circumference of the disk, which is continuous with the membranes. Its uterine surface is divided by a series of fissures into lobules or cotyledons, the fissures containing the remains of the septa which extended between the maternal and fetal portions. Most of these septa end in irregular or pointed processes; others, especially those near the edge of the placenta, pass
Source: http://bartleby.com/107/pages/page64.html
Upton Sinclair, ed. (1878–1968). The Cry for Justice: An Anthology of the Literature of Social Protest. 1915.
Essay on Liberty
By John Stuart Mill (English philosopher and economist, 1806–1873)
MANKIND can hardly be too often reminded, that there was once a man named Socrates, between whom and the legal authorities and public opinion of his time, there took place a memorable collision. Born in an age and country abounding in individual greatness, this man has been handed down to us by those who best knew both him and the age, as the most virtuous man in it; while we know him as the head and prototype of all subsequent teachers of virtue, the source equally of the lofty inspiration of Plato and the judicious utilitarianism of Aristotle, the two headsprings of ethical as of all other philosophy. This acknowledged master of all the eminent thinkers who have since lived—whose fame, still growing after more than two thousand years, all but outweighs the whole remainder of the names which make his native city illustrious—was put to death by his countrymen, after a judicial conviction, for impiety and immorality. Impiety, in denying the Gods recognized by the State; indeed his accusers asserted (see the Apologia) that he believed in no gods at all. Immorality, in being, by his doctrines and instructions, a corrupter of youth. Of these charges the tribunal, there is every ground for believing, honestly found him guilty, and condemned the man who probably of all then born had deserved best of mankind, to be put to death as a criminal.
Source: http://bartleby.com/71/0606.html
E. Cobham Brewer 1810–1897. Dictionary of Phrase and Fable. 1898.
One which shows the new and full moon, with the time of Easter and the movable feasts depending thereon. The reformed calendar of the Church of Rome, introduced by Pope Gregory XIII. in 1582, corrected the error of the civil year, according to the Julian calendar.
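As a modern aside (not part of Brewer's entry), the "time of Easter" that such a calendar displays is fixed by a purely arithmetical rule; the widely used anonymous Gregorian computus can be sketched as:

```python
# Anonymous Gregorian computus (Meeus/Jones/Butcher) for Western Easter Sunday.
def gregorian_easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter Sunday in the Gregorian calendar."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2016))  # (3, 27): Easter fell on 27 March 2016
```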
Source: http://bartleby.com/81/7611.html
A part of the global food crisis is the inefficiency of current irrigation methods. More irrigated water evaporates than reaches the roots of crops, amounting to an enormous waste of water and energy. Tel Aviv University researchers, however, are investigating a new solution that turns the problem upside-down, getting to the root of the issue. They are genetically modifying plants' root systems to improve their ability to find the water essential to their survival.
The Root Cause of Wasting Water
When it comes to water, every drop counts. "Improving water uptake by irrigated crops is very important," says Prof. Amram Eshel, the study's co-researcher from Tel Aviv University's Plant Sciences Department. His team, with that of Prof. Hillel Fromm, hopes to engineer a plant that takes advantage of a newly discovered gene that controls hydrotropism, a plant's ability to send its roots towards water. Scientists in TAU's lab are observing plants that are grown on moist air, making it possible to investigate how the modified plant roots orient themselves towards water. Until now, aeroponics (a method of growing plants in air and mist) was a benchtop technique used only in small-scale applications. The current research is being done on the experimental model plant Arabidopsis, a small flowering plant related to cabbage and mustard.
Environmental Consequences Have Economic Consequences Too
"Our aim is to save water," explains Prof. Eshel. "We are increasing a plant's efficiency for water uptake. Plants that can sense water in a better fashion will be higher in economic value in the future." There can be significant water-saving consequences for farmers around the world. "We are developing plants that are more efficient in sensing water," says research doctoral student Tal Sherman, who is working under Prof. Amram Eshel and Prof. Hillel Fromm. The project is funded by a grant from the Israeli Ministry of Agriculture and Rural Development to Prof. Fromm and Prof. Eshel.
Ideas Planted in Darwin's Time
In the nineteenth century, scientists were already observing that plant roots naturally seek out the wetter regions in soil. Although the phenomenon is well documented, scientists until recently had no clue as to how the mechanism worked, or how to make it better. New insights from the Tel Aviv University study could lead to plants that are super water seekers, say researchers.
Contact: George Hunka, American Friends of Tel Aviv University
Source: http://bio-medicine.org/biology-news-1/Tel-Aviv-University-researchers-root-out-new-and-efficient-crop-plants-4267-1/
School’s out. It’s the time of year that was once reserved for kids to help out on their family farms, maintaining and pulling in the crops. Now, it’s a time of leisure, play and (with any luck) visits to the grandparents. However, research shows that kids lose ground academically during the summer months unless they use the skills they learn in school throughout the year. Some studies suggest that the loss is up to three months of in-classroom work. That’s where a parent, grandparent, favorite uncle or close family member can make a real difference in a child’s education. And it doesn’t have to be in a classroom.
Get Them Out of the House
Local libraries often have summer reading programs for kids of all ages, so sign them up now. It’s a low-cost activity that can be fun for adults and children. Some libraries offer contests and allow the whole family to sign up. And with books in hand, you can work with a child to read them. Create a routine that includes 30 minutes of reading before bed or after lunch if you’re helping with day care. For older children, you might consider issuing a book challenge. Who can read the most books over the summer? Or join them in your City Reads program. Over the summer many library systems pick one book and encourage the entire city to read that book. Often, there are events associated with a City Reads program too. Your local library will have details.
Museums are another way to stimulate the mind of a child. Encourage them to ask questions and ask them questions about what they see. In some larger museums there is curriculum available designed especially to engage children in the exhibits. Call or visit the website of your local museum to see what they have available.
Buy Them Books for the Summer
Live too far away from the kids in your life? Buy them books for the summer. Get the same books for yourself and then spend 20 minutes on the phone reading with them. They want your undivided attention and you want them to be stronger readers. It’s a win-win. Local nonprofits, like AARP Experience Corps and Boys and Girls Clubs, often provide free books to students in their programs over the summer too. Make sure to ask the children in your life what they are planning to read over the summer and if they have any new books. Ask them to read those books with you. That can be done over the phone, video chat or in person.
The bottom line? You can have a role in helping the kids in your life be prepared for school in the fall … and have fun doing it.
Photo credit: Svadilfari via Flickr.
Source: http://blog.aarp.org/2013/05/28/making-summer-reading-fun-for-kids-summertime-programs-for-children/print/
Yesterday we featured a book about tarantulas and asked some questions about them. Today we have the answers to those questions, using the illustration. (Photograph by Jason van den Bemd)
1. Find the legs and count them. Are there eight legs? Yes, tarantulas have eight legs like other spiders and arachnids.
2. What are those two appendages in front of the tarantula? Those two shorter appendages at the front of the tarantula are not legs. They are called pedipalps. They are used for various purposes other than walking.
3. Can you find the eyes? Do you know how many eyes a tarantula has? Is this more than, less than or the same number as other spiders? Tarantulas have eight small eyes. That is the same number as most spiders. One exception is the brown recluse, which has only six eyes.
4. Where are the spinnerets to make silk? Tarantulas have spinnerets at the back of the abdomen. In this photograph they only show as a slight bump. Both males and females make silk. They use it to line their burrows. Tarantulas do not make elaborate webs.
5. Is this a male or female spider? How can you tell? Based on the fact this spider was found out wandering around and that it has extensive black coloring on its legs and abdomen, it is reasonable to assume it is a male. Mature male tarantulas have a hook on the tibia of the front leg, which is not visible in the photograph.
How did you do? Are you a tarantula expert? Be sure to let me know if you have any other questions about tarantulas.
Source: http://blog.growingwithscience.com/2013/09/answers-to-tarantula-questions/
These simple outdoor activities are great for keeping kids of different ages active and entertained, while spending time together! When the warm weather starts and school breaks begin, kids of different ages can suddenly find themselves spending time together . . . and thinking, “What now?” Whether you’re having a multi-family playdate or your kids aren’t sure what they have in common with the other children in the neighborhood, these timeless outdoor activities could be the answer!
- Beanbag Toss: Use sidewalk chalk to draw throw lines at different distances from the target, for children of different ages. (You might even let toddlers walk up to the game and drop their beanbag in each hole, or place their beanbag on the sticky target!)
- Water Play: With supplies as simple as a wide plastic bowl (or water table), a collection of cups and a garden sprinkler, older children can experiment and learn about volume and cause-and-effect, while younger kids develop fine motor skills and enjoy a cooling, sensory splash. Of course, one of the coolest things about water is that it’s used to make BUBBLES . . . Let the shimmery magical fun begin!
- Hopscotch: Older children can get a game going with a few sticks of chalk and play together, according to traditional directions (toss a marker into the number squares in order, hopping to pick it up each time without stepping out); younger children can take a turn tossing a marker anywhere on the game, or just hopping from square to square. Sidewalk chalk and a little imagination can take outdoor play time almost anywhere!
- Sand Play: No matter their skill level, children of all ages can play in the sand together! If your demo-minded toddler and architect big kid have different ideas about how to treat a sandcastle, try introducing play figures instead of building tools for a dino dig, moon landing or other imaginative play.
- Nature Walk & Hunt: Each person uses a pair of binoculars (real or crafted out of toilet-paper rolls) or a magnifying glass to find something in the yard. With older children, you can play a guessing game or challenge them to spot smaller objects such as a bird’s nest. Younger children will enjoy playing along, counting items in broader categories, such as trees or flowers. You can easily turn your adventure into a real game with this printable nature bingo game sheet!
What are some of your child’s favorite outdoor activities? Join the conversation in the comments section below!
Source: http://blog.melissaanddoug.com/2012/04/12/5-outdoor-activities-children-of-different-ages-can-share/?like=1&source=post_flair&_wpnonce=f636fc8316
Your kids may soon have a day off from school so their teachers can participate in a Professional Development Day, but do not assume this day need only be for the teachers‘ professional development! Take the day to learn about different professions with your child. It’s a great opportunity for him to see and appreciate the many ways in which people work to earn a living and develop their skills, abilities, and talents. Plus, you never know, he just may find something he wants to be when he grows up and will learn what he needs to do to achieve his goal!
To start your Professional Development (P.D.) Day, print off several pages of this worksheet (click here to download) and store them in a folder or binder. This will be where he keeps everything he’s learned about each profession. Since they are blank, he may select any occupation he wishes to observe and record – take it while you are out and about running errands, and he can write about what the cashier does, or what the bank teller’s role is, or why the garbage collector’s job is so important. Head to the library or internet to look up even more information about the jobs he is interested in. You can help to make it even more interesting by gathering up some toys or costumes to help him role-play as many jobs as possible – there is no better way for him to find out whether he likes a particular profession than by learning on the job!
While there are a plethora of professions out there to be explored, we’ve been focusing on the following (and they seem to be pretty popular answers to the question “What do you want to be when you grow up?”):
Train Engineer: Any mother of boys will most likely come across this at some point in the early years, but I know many girls who also love to play with trains! A train engineer costume and a train set, even if it’s made from paper tracks and a wooden-block train, are perfect for setting the stage. Get to know the different parts of a train. In what ways could they break down or wreck? How might they be fixed? Should the train go fast or slow going downhill? Role-play a day in the life of a train and discover the many important things they need to do, both passenger trains and freight trains.
Cowboy: Be a cowboy or cowgirl for a day! Dress up in a cowboy costume and visit a farm or petting zoo, or simply pull out a favorite farm/animal book or fold & go barn. Anything that will help them learn about animals or crops will work well. What do you know about crop rotation or pivots? Why are they important? What kinds of things do the different animals on the farm eat? How often do they need to be fed, and what else does a farmer or cowboy need to do to take care of them? Why is it important that we have farms?
Fire Chief: Not only is this a good activity for learning about what a fire chief and other firefighters do, but it’s a perfect opportunity to talk about fire safety and how serious the dangers of fire can be. We used this fire rescue set to role-play what happens in a house fire, and then spent some time talking about what our family fire plan would be – what to do (stop, drop, roll), how to get out of the house, etc.
Professional Chef: A few months ago, I posted about a Kid’s Restaurant activity in which we dressed up as a chef, learned about different types of food, how they are good for us, and role-played being a chef in a restaurant.
To take it a step further, we’ve since made simple kid-friendly recipes in the kitchen to learn more about what foods taste good together as well as some techniques, such as how to measure, how to use a rolling pin, and how to follow a recipe.
Construction Worker: Did you know that a career in construction requires more than just manual labor? It can require math for carpenters, computer skills for workers using heavy machinery, and chemistry for welders, among many other skills. So gather a hard hat and construction worker costume and get to work learning about all the ins and outs of specific types of construction. Or simply let him enjoy the satisfaction that comes with having built something himself using building blocks, or a wooden tool kit.
Doctor: What does a doctor do? Well, there are many different kinds of doctors. You could learn about and role-play what doctors do generally, but there are variations on what they do specifically depending on the type of doctor you’re researching. Do you want to learn about family doctors, ER doctors, pediatricians, etc.? It may be beneficial to first learn about all the different types of doctors before going into what they do and what kind of training and education they require.
Police Officer: Young children may think that a police officer’s only role is to put people in jail. And while that is certainly one of their responsibilities, they are also there to make sure people are safe and to enforce the rules. Have your child make a list of rules that need to be followed in your community – obey the speed limit, wear seat-belts, no stealing, etc. While police officers ensure that people are obeying the rules, or laws, of your city, they are also there to keep people safe, help people who are hurt, and help find people who may be lost.
Veterinarian: At first glance, a veterinarian costume may look like the same thing regular doctors wear. But what is the difference between what they do every day? While doctors help people stay healthy, veterinarians help animals to stay healthy. Many children love animals, so build on that interest and learn about as many different animals as possible. If you have a pet in your house, give your child some responsibility when it comes to taking care of it. If you don’t have a pet in your house, stuffed animals are a great substitute!
By exposing your child to the many different professions and their roles in society, he will become more aware of what might be available for him in the future, and also what it takes to make a community run smoothly. Every job is important and is worth learning about!
Katie Heap is the author of Live Craft Eat – a place where she writes about her 3 loves: raising her family, her crafting endeavors, and learning to cook. You can subscribe to her blog or follow Live Craft Eat on Pinterest and Facebook.
Source: http://blog.melissaanddoug.com/2013/10/08/professional-development-day-activities-for-kids/
Pruitt-Igoe was taken down in stages, beginning in 1972. Video of the demolition was broadcast around the world. What happened in those brief intervening years is the subject of a new documentary, “The Pruitt-Igoe Myth: An Urban History.” Star-Ledger editorial writer Linda Ocasio spoke with filmmaker Chad Freidrichs.
Q. What inspired you to make the documentary?
A. My wife, Jaime, and I were decorating our first house and reading a lot about Modernist architecture and design. I came across Pruitt-Igoe while listening to an audio lecture about architecture and the city. Pruitt-Igoe was billed as this giant failure of Modernism, where the design had led to social collapse. When I learned that this housing project was actually in St. Louis, I got really excited because I went to high school on the outskirts of St. Louis, but I had never heard of Pruitt-Igoe.
Q. What did you discover in your research?
A. I came to understand that there were far deeper issues than the project’s design. Pruitt-Igoe had to be placed in its social, economic, legislative contexts if you really wanted to understand what happened. So, the lesson I took from all of this is the danger of oversimplifying, and the tendency in discussions of Pruitt-Igoe to try to find one scapegoat, especially when it fits neatly into a preformed narrative, whether about architecture, welfare programs or urban poverty.
Q. What was the biggest surprise?
A. For me — and I think this hits everyone who watches the film — it’s that fathers were encouraged by Missouri welfare policy to leave the home. You had situations where families were destroyed by a public policy. In addition, I learned how deeply embedded segregation was into the city officials’ vision of St. Louis public housing.
Q. What else did you learn?
A. Rising crime was an issue, not just in Pruitt-Igoe, but throughout St. Louis and nationwide during these years. The picture of crime-ridden Pruitt-Igoe isn’t the whole picture. Plenty of people in the projects were living relatively normal lives. But most discussions of Pruitt-Igoe focus strictly on the criminal behavior; this stigmatizes the projects and the people who live in them. The broader lesson is that there was a social commitment to public housing initially because it was intended for working- and middle-class people. When public housing became a poor person’s project, that social commitment evaporated. I don’t think that’s an accident.
Q. Would anyone build on that scale again?
A. In order to understand Pruitt-Igoe’s scale, you have to look at the context. St. Louis wanted to see itself as a big city, like Chicago or New York. The way forward was to build new and to build big. And, in a very practical sense, it was needed. St. Louis was overcrowded in 1945, with migrants streaming in. The city desperately needed more housing. But shortly after Pruitt-Igoe’s construction, the city’s population crashed because of suburbanization. St. Louis lost half of its population in a generation. Rents went down, vacancies went up. And Pruitt-Igoe was deeply hit by these changes. For a variety of reasons, it could no longer attract tenants, so vacancy became a huge issue. Those large, vacant buildings have so much to do with the vandalism and crime that are associated with Pruitt-Igoe. The large scale, by itself, wasn’t the issue. The large scale, relative to the dwindling population, became a huge problem.
Q. You gave former residents a chance to recall their happy memories of living in Pruitt-Igoe.
A. There’s been a tendency to focus on one extreme or another, usually the negative. We tried to show the good times and the bad. There’s certainly a sadness about its decline. And that downward trajectory can be horrifying. But to say that Pruitt-Igoe was a downward spiral and nothing more is to oversimplify the variety of experience of life lived in the projects.
Source: http://blog.nj.com/njv_editorial_page/2012/02/filmmaker_chad_freidrichs_on_t.html
Stand with Sportsmen and Conservationists for Clean Water
Whether you’re an angler, hunter or wildlife viewer — or, perhaps like many of us, are all three — you understand the importance of watery habitats for wildlife. These habitats not only include the obvious, such as larger rivers and lakes, but also the water bodies we don’t see as often. Headwater streams provide important spawning and rearing habitat for fish, while wetlands are utilized for breeding, rearing, and migrating by waterfowl. Both wetlands and headwater streams help provide important drinking water for wildlife and people of all shapes and sizes.
Unfortunately, over the past decade, safeguards for many streams, lakes and wetlands have steadily eroded. Two Supreme Court decisions and subsequent administrative guidance in the 2000s removed Clean Water Act protections for at least 20 million acres of wetlands, allowing for the pollution and degradation of much critical habitat that was previously protected. The recently proposed rule by the Environmental Protection Agency and U.S. Army Corps of Engineers ensures the Clean Water Act once again safeguards many — but not all — streams, lakes, and wetlands that had lost protections due to the court decisions.
There’s a great deal of misinformation out there about what the rule actually does and does not do. Attacks against the proposed rule are primarily based on exaggeration and extreme worst-case scenarios, and often have very little grounding in actual fact. Many are just plain false. Politically, these attacks have included a proposed rider to a Senate spending bill, a recently introduced Senate bill, and a charged House hearing on the subject, among others. Despite what the opposition says, the proposed rule doesn’t protect all waters of the U.S. and doesn’t change the definition of navigability. It does provide exemptions for many farming, timber and other land-use activities, and leaves many important waters at risk to these operations. The proposed rule is a compromise solution that restores protections for some wildlife habitats but also ensures many protections for landowners.
Getting this proposed rule passed and implemented is an important battle for everyone that cares about wildlife and clean water. It’s also vitally important that sportsmen and sportswomen be involved. Rarely have so many sporting organizations come together on a common cause as they have with this one. Back in 2008, Congress had the opportunity to act, but didn’t, and again sportsmen are united to achieve administratively what Congress has been unable or unwilling to do legislatively.
If you’re an angler and/or hunter wondering if your voice matters, you need look no further than this struggle. Know that you do matter and can make a difference, even during these days of political polarization and corporate spending. Sportsmen’s engagement likely kept the Senate rider from derailing the proposed rule in June. In fact, shortly after the rider was defeated, President Obama said, “I am going to stand with sportsmen and conservationists against members of Congress who want to dismantle the Clean Water Act.”
It’s been decades since there’s been a federal conservation action as significant for trout fishing and duck hunting as the current proposed Clean Water Act rule. These clarifications of an already existing law are currently open for public comment and are already under heavy attack, so now is a critical time to show your support for the habitat that ducks, fish, and our outdoor traditions depend on.
Take a stand with other sportsmen and conservationists: support the proposed Clean Water Act rule.
Source: http://blog.nwf.org/2014/07/sportsmen-cleanwateract/
Alright, the title is a bit of a stretch. However, the efforts of Mackenzie Cowell and Jason Bobe have demonstrated that hacking biology can and may some day become as common as hacking away on your computer (think positive!). A recent article in the Boston Globe lists some of the issues related to doing biological work from home and how there are rules and regulations that need updating and revision. Unlike hacking on your computer, biological work usually generates waste that may (or may not) be harmful, and the work is not as “straightforward” as coding.
The movement is getting much of its steam from synthetic biology, a field of science that seeks to make working with cells and genes more like building circuits by creating standardized biological parts. Although it’s not as simple as it’s portrayed, synthetic biology does allow for genetic transformation of bacteria and other types of cells with great ease; however, it should not be something taken lightly and performed by amateurs. Or should it? What is your opinion about DIY Bio? I’d love to hear your opinion.
Source: http://blog.openwetware.org/community/tag/hacking/
When you’re a new parent rocking a screaming infant in your arms at 2 a.m., you begin to wonder, Will he ever sleep through the night?
A new study indicates that most babies are capable of “sleeping through the night” by three months. (This means they’re sleeping between five and eight hours. Anyone find this hard to believe?) In the past most pediatricians have told parents they should expect their babies to get a full night’s sleep by 12 months, so the findings in this study are surprising.
The researchers asked 75 new parents to keep diaries tracking their babies’ sleep habits for six days each month for a year. Parents were also invited to shoot time-lapse video of their children’s sleep so researchers could monitor the accuracy of the diaries.
The study’s purpose was to determine whether new babies could actually sleep through the night. Researchers looked at three different criteria: sleeping uninterrupted from midnight till 5 a.m., sleeping uninterrupted for eight hours, or sleeping uninterrupted from 10 p.m. to 6 a.m. More than half of the infants were able to sleep from 10 p.m. to 6 a.m. by five months.
The researchers didn’t collect data on whether the infants were breast- or bottle-fed. Nor did they track the specific methods and techniques parents were using to put their children to sleep. The researchers did conclude, however, that parents should develop a sleep routine by the time an infant reaches 1 month.
The study was published in the November edition of the journal Pediatrics.
<urn:uuid:37a58164-bfa0-4cfd-a320-17d9a794baec>
CC-MAIN-2016-26
http://blog.sfgate.com/sfmoms/2010/10/26/when-should-a-baby-sleep-through-the-night-a-new-study-offers-the-answer/?gta=commentlistpos
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973669
330
2.765625
3
The Washington Post has noted the Neanderthal research of SMU archaeology graduate student Metin I. Eren in a new article, “Neanderthals reimagined,” that looks at the changing scientific interpretation of human ancestors. In the Oct. 5 article, reporter Marc Kaufman cites Eren’s 2007 research as some of the scientific evidence showing Neanderthals were smarter than once thought, and more like sisters and brothers to modern humans, rather than cousins, as previously perceived. By Marc Kaufman The Washington Post Scientists are broadly rethinking the nature, skills and demise of the Neanderthals of Europe and Asia, steadily finding more ways that they were substantially like us and quite different from the limited, unchanging and ultimately doomed inferiors most commonly described in the past. The latest revision involves Neanderthals who lived in southern Italy from about 42,000 to 35,000 years ago, a group that had to face fast-changing climate conditions that required them to adapt. And that, says anthropologist Julien Riel-Salvatore, is precisely what they did: fashioning new hunting tools, targeting more-elusive prey and even wearing identifying ornaments and body painting. Traditional Neanderthal theory has it that they changed their survival strategies only when they came into contact with more-modern early humans. But Riel-Salvatore, a professor at the University of Colorado at Denver writing in the Journal of Archaeological Method and Theory, says that was not the case in southern Italy. “What we know is that the more-modern humans lived in northern Italy, more-traditional Neanderthals lived in middle Italy, and this group that adapted to a changing world was in the south — out of touch with the northern group,” he said. … Research debunking the position that Neanderthals were “cognitively inferior” comes from Daniel Adler of the University of Connecticut and Metin Eren of Southern Methodist University. In 2006, Adler described evidence that Neanderthals hunted just as well as Homo sapiens, even if their weapons were less sophisticated. In 2007, Eren replicated the making of Neanderthal disc-shaped tools, or “flakes,” and found they were in some ways more efficient than Homo sapiens’ blade-based tools. Both researchers said that while the Neanderthals did not make the transition to more advanced tools — which generations of researchers saw as proof of Homo sapiens’ superiority — they were nonetheless well adapted to their environment.
<urn:uuid:3f7d1b8d-7050-4b25-b704-cc216ca2aca4>
CC-MAIN-2016-26
http://blog.smu.edu/research/2010/10/05/the-washington-post-evidence-increases-that-neanderthals-more-closely-linked-to-humans/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961179
533
3.453125
3
UI, UX, IA, IxD, UCD, HCI . . . There seems to be a lot of confusion around User Experience Design, what it does, why it is important and how it is different from User Interface Design. This last point is often combined with calls for rules or SOPs to guide developers in creating good user experiences. While there are certainly guidelines and design patterns for creating digital interfaces, these are by no means prescriptive. User Experience Design is about crafting the potential for a great interaction between a user and his/her tool. This interaction, however, is highly dependent on the situation; the environment, type of user, interaction modes, visual language, and especially the user themselves. The consequence of this web of interconnected dependencies is that there are no hard and fast rules. There are, however, some concepts which all User Experience and Interaction Designers strive to achieve through the use of the visual elements and principles of design. This is an fantastic talk and interesting article on internet personalization filtering that happens automatically. By doing this ‘invisible’ filtering, search engines are usurping the control from the user. While the smart algorithm filtering is very helpful, it has to be made visible. Any time a system utilizes automation, that automation must be communicated to all parts of the system – especially the human user (see airplane crashes for the result of that lack of communication in extreme cases). Users often interact with similar content in different ways. Online news content is a good area to see this clearly. And it is getting a lot of attention these days following the release of the Pew Report on “Navigating News Online.” Some people are casual browsers, visiting sites and stories suggested by their social network. Some people are occasional users, checking in with a news website once a week. There are mobile users who consume immense amounts of content through their mobile device. Others are power users – super consumers who visit several sites, get massive amounts of content, often on a computer, and share that content with others. Then there are people like me. I consider myself a multi-channel regular user: I consume news content in several different ways (email digest, through websites, rss feeds, and apps on my phone) at least once a day. I never though about why I interacted with similar sites and content in different ways until reading some of the recent articles about the ways people consume news. How can we know if interaction design is successful? Being an interaction designer, I think about this often. Moreover, I think about how we discuss these questions. Many designers evaluate designs on an instinctive level and when they attempt to externalize their thoughts, it ends up sounding like “I just know users will hate that.” How then do designers discuss interactions in a credible way? How does one evaluate interaction design? Why do I know users will hate that (whatever that is)? How Mere Words Can Shape Enterprise Trajectory Businesses at all levels are beginning to actively engage with the problems of user experience because they have seen the profound effects of excellent interactions with customers: more re-purchasers, more customer referrals, greater customer loyalty, etc. The list of benefits goes on and on. I’m not going to discuss the need for interaction design and customer experience management right now. 
Research shows that “90% of companies feel that customer experience is very important or critical in 2010, and 80% intend to use it as competitive differentiation.”1 Most companies understand that need, and those that don’t will soon be left behind. Hello world indeed! I’m just getting this blog set up, so it may be a few days before my first real post. The purpose of this blog is to discuss interaction design and its impact on user and customer experience, especially within enterprise environments. I’ll be blogging from my perspective as an Enterprise Interaction Designer with Adobe Systems, but remember, this is a personal blog, so my views do not necessarily reflect the views of Adobe. One of my goals with this blog is to influence the discussion of users here at Adobe, but also to participate in the greater conversation on interaction in the design world, so comments (that can add to our conversation) are welcome. I look forward to sharing with you.
<urn:uuid:a64e8721-7999-4194-9264-fdf70fefcc97>
CC-MAIN-2016-26
http://blogs.adobe.com/interactiondesign/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94841
888
2.59375
3
The evolutionary history of lions is opaque, as most of the sub-species that once roamed the Old World are now extinct. However new research, published in BMC Evolutionary Biology, uses ancient DNA from extinct lions to piece together the gaps in their history. The findings provide both a new understanding of the lion’s past and may provide new insight into how to conserve what remain for the future. Lions once roamed across the world. Until relatively recently, various sub-species could be found across Africa and all the way from the Indian subcontinent, through the Middle East and into modern day Greece and Turkey. Visitors to the British Museum in London can see engravings from Assyria (in modern day Iraq) made as recently as 635BC of large scale lion hunting by the local people (see picture). Sadly these hunts, and others like them, were rather too successful. Along with increasing human encroachment on their habitats, persecution has resulted in the virtual extinction of lions outside of sub-Saharan Africa. The Asian lion lives on only through a small and highly endangered Indian population of approximately 400 individuals; all other lions outside of sub-Saharan Africa are extinct. These recent extinctions create problems for our understanding of the evolutionary history of lions. Such an understanding is desirable not just to satisfy our curiosity about these charismatic animals, but also to help focus efforts for conserving those lions that still remain. Yet sampling only those lion populations that are still in existence inevitably gives an incomplete – and perhaps misleading – impression of this species history. Well then, suggests new research recently published in BMC Evolutionary Biology, why not use what remains of extinct lions to fill the gaps in their history? Origins of the ancients A worldwide team of researchers, led by Ross Barnett of Durham University, searched museums across Europe to locate bone or tissue specimens from extinct lions, in order to extract their DNA. Samples from extinct lions originally living in West, Central and North Africa, as well as some from the Middle East, were tracked down, of which a number had somewhat unusual origins. Two skull fragments from North African Barbary lions, now housed in London’s Natural History Museum, were originally found underneath the Tower of London during building work, presumably remnants of the Tower’s medieval Royal menagerie. Extracting mitochondrial DNA (mtDNA) from these museum samples and combining the DNA with 74 previously published mtDNA sequences from existing lion populations in Africa and Asia enabled the authors to estimate both the relatedness of these different populations (ancient and modern) as well as the likely dates they separated from one another. Consistent with previous analyses, the researchers find that the most likely origin of lions is eastern-southern Africa. What’s new is the estimated date for the migration of lions out of Africa and into Asia. Previous research, based only on existing populations, suggested that this occurred at least 74,000 years ago and perhaps as long as 500,000 years ago. This new work suggests that the exodus of lions out of Africa occurred a mere 21,000 years ago. The additional use of ancient DNA from extinct populations has then resulted in substantially different conclusions from studies using living lions only. Out of Africa This data also allows the likely course of this migration to be mapped. 
The results suggest that, having initially arisen in East Africa, lions migrated to West and Central Africa approximately 120,000 years ago; climatic change then separated these two populations. The authors’ analysis suggests these western populations were the pioneers who migrated into North Africa and from there into Asia, Europe and the Middle East (see figure 3 for more details). Genetically, then, modern-day lion populations in West and Central Africa appear to be more closely related to existing Asian lions (and all those now extinct populations in between) than they are to East African lions. Given that these West and Central African lion populations are close to extinction in the wild, this has potentially important implications for lion conservation. Currently, all African lions are considered to represent just one conservation unit, with Asian lions making up a second. The authors strongly suggest that this dichotomy now needs to be revised. Central and West African lions are actually more closely related to Asian lions and, to preserve as much lion diversity as possible, extra attention needs to be given to these populations. This research, then, does not just change our perception of the lion’s past but may also change our perception of how to safeguard its future. Ancient DNA comes of age These results come with caveats. All these findings are based on maternally inherited mitochondrial DNA which, as the researchers readily admit, cannot be considered as accurate as phylogenetic analyses based on biparentally inherited nuclear DNA. This is a particular concern in species like lions, where it is males that migrate between populations, while females usually remain with the group of their mother. Of course, the reason for using mtDNA is that it is easier to extract successfully from difficult samples like bone fragments than nuclear DNA. Perhaps in the future, next-generation sequencing techniques will enable the extraction of nuclear DNA from such samples with sufficient quality to confirm the authors’ results. Even without this confirmation, it remains extraordinary that we can reach back into the past and utilise the DNA of animals dead many centuries, whose entire sub-species are no longer found anywhere on earth. Ancient DNA studies have the potential to explain the diversity of the past and, perhaps, help us conserve it for the future.
<urn:uuid:5f1e5008-b260-481d-9ccc-65548d5e6da7>
CC-MAIN-2016-26
http://blogs.biomedcentral.com/bmcseriesblog/2014/04/08/ancient-dna-reveals-the-lions-past-and-perhaps-future/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957016
1,115
3.84375
4
The glaciers that shine at the top of Mount Kilimanjaro, the highest peak in Africa, could vanish entirely within 15 years, according to a somber new report. Says glaciologist Lonnie Thompson: “Of the ice cover present in 1912 … 85% has disappeared and 26% of that present in 2000 is now gone” [USA Today]. The mountaintop glaciers are both shrinking around the edges and growing thinner, Thompson’s team found. If the current rate of ice loss continues, the mountain could be ice free as early as 2022. Thompson says his team has fresh evidence that global warming is to blame. As similar changes are occurring on other mountains in Africa, South America, and in the Himalayas, Thompson says that global climate change, not local weather effects, must be responsible for the receding ice. “The fact that so many glaciers throughout the tropics and subtropics are showing similar responses suggests an underlying common cause,” Thompson said [AP]. For the study, published in the Proceedings of the National Academy of Sciences, the researchers used maps, aerial photographs, and satellite images to track the ice’s retreat over the last century, and also looked at data from instruments implanted in the glaciers in 2000. Some previous researchers have argued that Kilimanjaro’s glaciers are disappearing because of what they viewed as local factors, namely less snowfall and more sublimation, which turns ice directly into water vapor. But Thompson found that higher temperatures are melting the ice, and he also argues that the drier and less cloudy conditions leading to sublimation on Tanzania’s Kilimanjaro are part of a suite of changes driven by global warming. “You change the temperature profile of this planet, you are going to change precipitation and cloudiness and humidity and temperature,” he said. “Those are all part of climate change. And so to say that that Kilimanjaro is not responding to global climate change is untrue” [National Geographic News]. If the glaciers disappear entirely, it will make an anachronism of a great piece of literature. The “snows of Kilimanjaro” were made famous in the Ernest Hemingway short story of that name in 1938, in which the main character notices “as wide as all the world, great, high and unbelievably white in the sun, was the square top of Kilimanjaro” [USA Today]. 80beats: 2 Trillion Tons of Polar Ice Lost in 5 Years, and Melting Is Accelerating 80beats: From Yellowstone’s Hills to Walden Pond’s Woods, Evidence of Global Warming 80beats: Global Warming Threatens Tropical Species, Too 80beats: Plants “Climb” Mountains to Escape Global Warming Image: Lonnie G. Thompson / Ohio State University
<urn:uuid:c699fd41-26a8-4139-81e6-d71bbdeff9cf>
CC-MAIN-2016-26
http://blogs.discovermagazine.com/80beats/2009/11/03/the-snows-of-kilimanjaro-could-be-gone-by-2022/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937852
602
3.703125
4
By: Margaret A. Hamburg, M.D. It has been 11 years since Congress passed the Best Pharmaceuticals for Children Act mandating the creation of the Office of Pediatric Therapeutics (OPT) at FDA. But it has been light years in terms of the progress we have made to ensure that children have access to innovative, safe and effective medical products. Increasingly, parents can rest assured that the medications they give their children have been tested—in children—in scientifically necessary and ethical clinical trials. OPT’s Pediatric Advisory Committee has reviewed over 200 products for their safety when used by children. Today, nearly 500 drug and biologic products have been improved by including, in their labeling, information describing safety, effectiveness and, where appropriate, dosing relating to use of the product in children. We’ve come a long way. And as director of OPT since 2003, pediatrician Dianne Murphy, M.D., has led the charge. It is because of her indefatigable work on behalf of children that the American Academy of Pediatrics (AAP) has bestowed on Dr. Murphy its Excellence in Public Service Award (EPSA) “representing the highest honor awarded…to a public servant for distinguished service to the nation’s children, adolescents and young adults.” In the letter informing Dr. Murphy of this honor, AAP says her work at FDA “has profoundly improved the lives of children in the U.S. and around the world through increased access to needed therapeutics.” I couldn’t agree more. For many years before Dr. Murphy began her tenure at FDA, the playing field badly needed leveling when it came to safety in children’s drugs. Very few drugs—even those meant solely for children’s use— were actually tested on children. For one thing, some drug companies, eager to get their new products out while they still held exclusivity (that is, the period of time during which generic drugs cannot be developed), were reluctant to take the time or go to the expense. Many people, too, felt it was unethical to use children as so-called “guinea pigs.” Without the information gained from testing children in clinical trials, health care professionals and parents alike could only guess when trying to gauge the correct dosages for children. This was an enormous problem. Children are vulnerable. Their organs are developing, and they are experiencing physical changes that affect how a drug is metabolized. As Dr. Murphy has often said, guessing at the dose just doesn’t work. Children must be studied if we are to give them safe and effective treatments. The sea change began ten years before her arrival at FDA, when AIDS was entering the public consciousness. Dr. Murphy has told me that at the time, she was treating children in her practice who were dying of AIDS because there were no drugs available specifically to address their unique needs. Based on that hard lesson, a major priority when she came to FDA was to ensure that drugs for children first be tested in children in clinical trials. Dr. Murphy has worked with FDA scientists and reviewers to ensure that pediatric studies are rigorously designed and conducted in accord with current scientific understanding of the characteristics that make children unique. And she has championed FDA’s involvement in the international arena, where FDA has emerged as a leader in regulatory thinking. In achieving this honor, Dr. Murphy is in very good company. She shares it with such distinguished former honorees as First Lady Michelle Obama, the late Sen. Edward M. Kennedy, Rep. 
Henry Waxman, former FDA Commissioner David A. Kessler, and National Institutes of Health Director Francis S. Collins. At FDA, we offer our congratulations, and our thanks. Margaret A. Hamburg, M.D., is Commissioner of the Food and Drug Administration
<urn:uuid:d16ce6ea-c9da-4566-9949-0e46447e7bd4>
CC-MAIN-2016-26
http://blogs.fda.gov/fdavoice/index.php/2013/05/honoring-an-fda-champion-of-safe-treatments-for-children
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972967
796
2.734375
3
By: Jonathan Goldsmith, M.D., F.A.C.P. If you personally know 100 people living in the U.S., chances are that almost 10 will suffer from some form of a rare disease. If that makes it sound like rare diseases are not actually very rare in this country, that’s because there are 7,000 different rare diseases, 80% of which are caused by faulty genes. A rare disease is defined as a condition that affects fewer than 200,000 people living in the U.S., a country with almost 320 million people. When we do the math, it turns out there are roughly 30 million Americans who suffer from a rare disease. And sadly, about 50% are children. With the vast majority of rare diseases still without FDA-approved treatments, we have recently released a new resource for drug developers — a draft guidance document — designed to help them navigate the difficult and unique challenges of developing and bringing to market new FDA-approved drugs to treat rare diseases. When it comes to finding ways to test new treatments for rare diseases, we often cannot rely on the same methods that we use for testing treatments for more common, well-known diseases, such as diabetes or high blood pressure. Here’s why: In rare diseases, new drug development is especially challenging due to the small numbers of people affected by each disease, the lack of medical understanding of the disorder (because relatively few people suffer from it), and the lack of well-defined study results (endpoints) that can demonstrate that a potential treatment for a rare disease is safe and effective. The new draft guidance is intended to help drug developers create more accurate and timely drug development programs by encouraging - a focus on understanding a disease’s “natural history,” - creation of study designs with clinically meaningful endpoints, - development of evidence needed to establish safety and effectiveness, - and the establishment of drug manufacturing specifications to ensure quality. It is also important to note that FDA regulations provide flexibility in applying regulatory standards because of the many types and intended uses of drugs. Such flexibility is particularly important for treatments for life-threatening and severely-debilitating illnesses and rare diseases. Our guidance document will help us build on the gains we’ve made in helping patients with rare diseases. Since the passage of the Orphan Drug Act in 1983, the number of new requests for orphan designation has continued to rise. In 2014 we saw 469 requests, the highest number of new requests in one year. Also in 2014, an unprecedented 41 percent of all novel new drugs (17 of 41) approved by FDA’s Center for Drug Evaluation and Research were for the treatment of rare diseases. Our guidance document is intended to encourage drug developers to think early on in the process about all aspects of their program — and encourages careful planning which includes a foundation in strong science. Drug developers for rare diseases are often pioneers. Pioneers need maps and tools to guide them. We see this guidance as another important resource to help support their efforts. FDA is committed to working with all drug developers and stakeholders to establish successful drug development programs that include regulatory flexibility, creative approaches and a scientifically sound basis. Jonathan Goldsmith, M.D., F.A.C.P., is FDA’s Associate Director, Rare Diseases Program, Center for Drug Evaluation and Research
<urn:uuid:e18df382-5752-4d24-a847-f008b2c4b0ce>
CC-MAIN-2016-26
http://blogs.fda.gov/fdavoice/index.php/2015/09/another-tool-helping-developers-navigate-the-difficult-road-to-approval-of-drugs-for-rare-diseases/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952317
691
3.453125
3
This past month brought dramatic, though not unexpected, relief to 33 drought-stricken counties in Minnesota. The frequent and abundant May rainfall was the most since 2004 and marked an emphatic end to seven months of severe drought in parts of Minnesota. Some areas received over three times the normal May rainfall amounts. Based on aggregate averages from cooperative weather station observations, May of 2012 shows a statewide mean value for total rainfall of 5.82 inches. Using similar averaging techniques (though the number of climate stations varies across years), May of 2012 ranks 4th all-time in average total rainfall for the state. The higher-ranked years include: 1908 with 5.94 inches; 1962 with 6.06 inches; and 1938 with 6.24 inches. Besides replenishing soil moisture values, the abundant rainfall helped to restore most Minnesota watersheds to normal or above-normal flow volumes. The exceptions seem to be some of the watersheds in the northwest that feed the Red River, which observed somewhat below-normal rainfall during the month. For specific flow data on individual watersheds you can use the DNR stream hydrology web site.
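The aggregation described above (averaging monthly totals across cooperative stations, then ranking the statewide mean against earlier years) can be sketched in a few lines of Python. This is only an illustration: the station totals are invented, the function names are mine, and the ranking table holds just the four statewide means quoted above rather than the full period of record.

```python
# Illustrative sketch only: invented station totals, and a ranking table that
# contains just the four years mentioned in the post.
from statistics import mean

def statewide_mean(station_totals):
    """Average the monthly rainfall totals (inches) reported by stations."""
    return round(mean(station_totals), 2)

# Hypothetical cooperative-station totals for May 2012 (inches)
may_2012_stations = [6.10, 5.40, 7.25, 4.95, 5.40]

statewide_means = {1908: 5.94, 1938: 6.24, 1962: 6.06, 2012: 5.82}

def rank_of_year(year, means):
    """Rank a year by statewide mean rainfall; 1 = wettest in the table."""
    ordered = sorted(means, key=means.get, reverse=True)
    return ordered.index(year) + 1

print(statewide_mean(may_2012_stations))    # 5.82 for these invented totals
print(rank_of_year(2012, statewide_means))  # 4, matching the "4th all-time" ranking
```

A full analysis would, of course, rank 2012 against every year of record, not just the handful listed here.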
<urn:uuid:8cfde8a2-1d80-4d11-bcc5-82e02903c366>
CC-MAIN-2016-26
http://blogs.mprnews.org/updraft/2012/06/statewide_perspective_on_may_r/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.929565
221
2.609375
3
IOM hopes landmark trial will help stem child trafficking from Haiti When authorities from the Dominican Republic raided several houses in a poor residential neighbourhood last year in the capital city Santo Domingo, they found 44 children crammed in rooms, some sitting on the floor, others huddled under beds, according to the International Organization for Migration (IOM). After the raid, 22 of the children were identified as victims of child trafficking, and this month two child traffickers received 15-year prison sentences for the smuggling, trafficking and labour exploitation of Haitian children after a historic trial. It is the first time Haitian traffickers have been jailed in the Dominican Republic for trafficking children, IOM said in a statement. “Parents were convinced their children were being taken for a better life in Santo Domingo and even to Miami,” IOM spokesperson Zoe Stopak-Behr told AlertNet, adding that all the money the children earned was taken by their traffickers. IOM returned the children to their families in Haiti, and also provided technical support and training to local state prosecutors during the trial, she added. “The Dominicans had been criticised for some time now for not bringing many trafficking cases to trial,” Stopak-Behr said. “The conviction is extremely important for prevention. It shows that there is a penalty for trafficking and that the Dominican authorities are working. We hope it will have a preventative effect and help stop the constant flow of children into the Dominican Republic.” At least 2,000 Haitian children were trafficked across the porous and poorly controlled border between Haiti and the Dominican Republic in 2009, according to UNICEF, the United Nations Children’s Fund. Haitian children are a frequent sight on the streets of Santo Domingo, and are often seen begging, shoe shining and washing car windscreens at traffic lights. Thousands of Haitian child domestic servants – known in Haitian Creole as “restaveks” – are also thought to be working in the Dominican Republic, according to IOM. UNPOL, the U.N. Police Division, says it has stepped up patrols along the 366 km (227 mile) border to combat child trafficking, a problem that worsened following the massive 7.0 magnitude earthquake that hit Haiti in 2010, according to both UNICEF and IOM. The disaster left hundreds of thousands of families homeless and pushed countless more Haitians into extreme poverty, forcing more families to send their children to Haiti’s wealthier Caribbean neighbour in search of work and a better life. “The number of cases of child victims of trafficking identified has most certainly increased since the 2010 earthquake,” Stopak-Behr said. “It’s impossible to determine whether this is due to an actual increase of human trafficking or just the result of greater attention and training on the subject,” she added. (Editing by Julie Mollins) Picture credit: An immigrant child from Haiti cleans the windshield of a car before asking for money on the streets of Santo Domingo, June 29, 2011. REUTERS/Eduardo Munoz
<urn:uuid:431af288-1164-4f1a-9b80-4d9bab4c7f34>
CC-MAIN-2016-26
http://blogs.reuters.com/the-human-impact/2012/06/14/iom-hopes-landmark-trial-will-help-stem-child-trafficking-from-haiti/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95611
651
2.59375
3
About the TNM System The TNM system is the most widely used means for classifying the extent of cancer spread. TNM Classification of Malignant Tumours, Seventh Edition provides the new, internationally agreed-upon standards to describe and categorize cancer stages and progression. This guide contains important new and updated organ-specific classifications that oncologists and other professionals who treat patients with cancer must use to adequately classify tumours for prognosis and treatment. This introduction provides a history of the TNM system, the principles of the classification of cancers and general rules of the TNM system applicable to all sites. Headings used in the TNM system to classify tumours for specific anatomical regions and sites are also provided with definitions. The History of the TNM System The TNM System for the classification of malignant tumours was developed by Pierre Denoix (France) between the years 1943 and 19521. In 1950, the UICC appointed a Committee on Tumour Nomenclature and Statistics and adopted, as a basis for its work on clinical stage classification, the general definitions of local extension of malignant tumours suggested by the World Health Organization (WHO) Sub-Committee on The Registration of Cases of Cancer as well as Their Statistical Presentation2. In 1953, the Committee held a joint meeting with the International Commission on Stage-Grouping in Cancer and Presentation of the Results of Treatment of Cancer appointed by the International Congress of Radiology. Agreement was reached on a general technique for classification by anatomical extent of the disease, using the TNM system. In 1954, the Research Commission of the UICC set up a special Committee on Clinical Stage Classification and Applied Statistics to "pursue studies in this field and to extend the general technique of classification to cancer at all sites." In 1958, the Committee published the first recommendations for the clinical stage classification of cancers of the breast and larynx and for the presentation of results3. A second publication in 1959 presented revised proposals for the breast, for clinical use and evaluation over a 5-year period (1960-1964)4. Between 1960 and 1967, the Committee published nine brochures describing proposals for the classification of 23 sites. It was recommended that the classification proposals for each site be subjected to prospective or retrospective trial for a 5-year period. In 1968, these brochures were combined in a booklet, the Livre de Poche5 and a year later, a complementary booklet was published detailing recommendations for the setting-up of field trials, for the presentation of end results and for the determination and expression of cancer survival rates6. The Livre de Poche was subsequently translated into 11 languages. In 1974 and 1978, second and third editions7 were published containing new site classifications and amendments to previously published classifications. The third edition was enlarged and revised in 1982. It contained new classifications for selected tumours of childhood. This was carried out in collaboration with La Société Internationale d'Oncologie Pédiatrique (SIOP). A classification of ophthalmic tumours was published separately in 1985. Over the years some users introduced variations in the rules of classification of certain sites. In order to correct this development, the antithesis of standardization, the national TNM committees in 1982 agreed to formulate a single TNM. 
A series of meetings was held to unify and update existing classifications as well as to develop new ones. The result was the fourth edition of TNM9. In 1993, the project published the TNM Supplement10. The purpose of this work was to promote the uniform use of TNM by providing detailed explanations of the TNM rules with practical examples. It also included proposals for new classifications, and optional expansions of selected categories. A second edition appeared in 2001.11. In 1995, the project published Prognostic Factors in Cancer12, a compilation and discussion of prognostic factors in cancer, both anatomic and nonanatomic, at each of the body sites. This was expanded in the second edition in 2001 with emphasis on the relevance of different prognostic factors. The subsequent third edition in 2006 attempted to refine this by providing evidence-based criteria for relevance. The present seventh edition of TNM Classification contains rules of classification and staging that correspond with those appearing in the seventh edition of the AJCC Cancer Staging Manual (2009) and have approval of all national TNM committees. The UICC recognizes the need for stability in the TNM classification so that data can be accumulated in an orderly way over reasonable periods of time. Accordingly, it is the intention that the classifications published in this booklet should remain unchanged until some major advances in diagnosis or treatment relevant to a particular site requires reconsideration of the current classification. To develop and sustain a classification system acceptable to all requires the closest liaison between national and international committees. Only in this way will all oncologists be able to use a ‘common language’ in comparing their clinical material and in assessing the results of treatment. While the classification is based on published evidence, in areas of controversy it is based on international consensus. The continuing objective of the UICC is to achieve common consent in the classification of anatomical extent of disease. 1. The General Rules of the TNM System1,21.1. General Rule No. 1 All cases should be confirmed microscopically. Any cases not so proved must be reported separately. Microscopic confirmation of choriocarcinoma is not required if the hCG is abnormally elevated. 1.2. General Rule No. 2 Two classifications are described for each site, namely: Biopsy provides the diagnosis, including histological type and grade. The clinical assessment of tumour size should not be based on the biopsy. In general, the cTNM is the basis for the choice of treatment and the pTNM is the basis for prognostic assessment. In addition, the pTNM may determine adjuvant treatment. Comparison between cTNM and pTNM can help in evaluating the accuracy of the clinical and imaging methods used to determine the cTNM. Therefore, it is important to retain the clinical as well as the pathological classification in the medical record. A tumour is primarily described by the clinical classification before treatment or before the decision not to treat. In addition, a pathological classification is performed if specific requirements are met (see Introduction). Therefore, for an individual patient there may be a clinical classification, e.g., T2N1M0 and a pathological classification, e.g., pT2pNXpMX. 1.3. General Rule No. 3 After assigning T, N and M and/or pT, pN and pM categories, these may be grouped into stages. 
The TNM classification and stage grouping, once established, must remain unchanged in the medical records. The clinical stage is essential to select and evaluate therapy, while the pathological stage provides the most precise data to estimate prognosis and calculate end results. After two surgical procedures for a single lesion, the pTNM classification should be a composite of the histological examination of the specimens from both operations. Example. Initial endoscopic polypectomy of a carcinoma of the ascending colon is classified pT1pNXpMX; the subsequent right hemicolectomy contains two lymph nodes with tumour, and a suspicious metastatic focus in the liver, later found to be a haemangioma, is excised-pT0pN1pM0. The definitive pTNM classification consists of the results of both operative specimens-pTlpN1pM0 (stage III). For final stage grouping clinical and pathological data may be combined when only partial information is available in either the pathological classification or the clinical classification. The example on p. 2 is expressed as pT2cN1cM0 (stage III). For further discussion on the meaning and application of X (e.g. NX, MX). 1.4. General Rule No. 4 If there is doubt concerning the correct T, N or M category to which a particular case should be allotted, then the lower (i.e., less advanced) category should be chosen. This will also be reflected in the stage grouping. If there are different results from different methods, the classification should be based on the most reliable method of assessment. Example. Colorectal carcinoma, preoperative examination of the liver: sonography, suspicious, but no evidence of metastasis; CT, evidence of metastasis. The results of CT determine the classification-Ml. However, if CT were negative, the case would be classified M0. 1.5. General Rule No. 5 |In the case of multiple simultaneous tumours in one organ, the tumour with the highest T category should be classified and the multiplicity or the number of tumours should be indicated in parentheses, e.g., T2(m) or T2(5). In simultaneous bilateral cancers of paired organs, each tumour should be classified independently. In tumours of the thyroid, liver, ovary, and fallopian tube, multiplicity is a criterion of T classification.| The following apply to grossly recognizable multiple primary simultaneous carcinomas at the same site. They do not apply to one grossly detected tumour associated with multiple separate microscopic foci. Multiple synchronous tumours in one organ may be: For (a) the multiplicity should be indicated by the suffix "(m)", e.g. Tis(m). For (b) and (c) the tumour with the highest T category is classified and the multiplicity or the number of invasive tumours is indicated in parentheses, e.g., T2(m) or T2(4). For (c) and (d) the presence of associated carcinoma in situ may be indicated by the suffix "(is)", e.g., T3(m, is) or T2(3, is) or T2(is). For classification of multiple simultaneous tumours in "one organ", the definitions of one organ listed in Table 1 should be applied. The tumours at these sites with the highest T category should be classified and the multiplicity or the number of tumours should be indicated in parentheses, e.g., T2(m) or T2(5). Combining multiple carcinomas of skin should be done only within subsites (C44.1,2, etc). A carcinoma of the skin in subsite C44.3 and a synchronous one in subsite C44.6 and C44.7 should be classified as separate synchronous tumours. 
Examples of sites for separate classification of two tumours are: Examples for classification of the tumour with the highest T category and indication of multiplicity (m symbol) or numbers of tumours: If a new primary cancer is diagnosed within 2 months in the same site this new cancer is considered synchronous (based on criteria used by the SEER Program of the National Cancer Institute, USA). 2. The Principles of the TNM System The practice of dividing cancer cases into groups according to so-called stages arose from the fact that survival rates were higher for cases in which the disease was localized than for those in which the disease had extended beyond the organ of origin. These groups were often referred to as early cases and late cases, implying some regular progression with time. Actually, the stage of disease at the time of diagnosis may be a reflection not only of the rate of growth and extension of the neoplasm but also of the type of tumour and of the tumour-host relationship. The staging of cancer is hallowed by tradition, and for the purpose of analysis of groups of patients it is often necessary to use such a method. The UICC believes that it is important to reach agreement on the recording of accurate information on the extent of the disease for each site, because the precise clinical description of malignant neoplasms and histopathological classification may serve a number of related objectives, namely - To aid the clinician in the planning of treatment - To give some indication of prognosis - To assist in evaluation of the results of treatment - To facilitate the exchange of information between treatment centres - To contribute to the continuing investigation of human cancer There are many bases or axes of tumour classification: for example, the anatomical site and the clinical and pathological extent of disease, the reported duration of symptoms or signs, the gender and age of the patient, and the histological type and grade. All of these bases or axes represent variables that are known to have an influence on the outcome of the disease. Classification by anatomical extent of disease as determined clinically and histopathologically (when possible) is the one with which the TNM system primarily deals. The clinician's immediate task is to make a judgment as to prognosis and a decision as to the most effective course of treatment. This judgment and this decision require, among other things, an objective assessment of the anatomical extent of the disease. In accomplishing this, the trend is away from "staging" to meaningful description, with or without some form of summarization. To meet the stated objectives a system of classification is needed - whose basic principles are applicable to all sites regardless of treatment; and - which may be supplemented later by information that becomes available from histopathology and/or surgery. 3. The General Rules of the TNM System The TNM system for describing the anatomical extent of disease is based on the assessment of three components: T. The extent of the primary tumour N. The absence or presence and extent of regional lymph node metastasis M. The absence or presence of distant metastasis. The addition of numbers to these three components indicates the extent of the malignant disease, thus: In effect the system is a "shorthand notation" for describing the extent of a particular malignant tumour. The general rules applicable to all sites are as follows: 1. All cases should be confirmed microscopically. 
Any cases not so proved must be reported separately. 2. Two classifications are described for each site, namely: (a) Clinical classification (Pre-treatment clinical classification), designated TNM (or cTNM). This is based on evidence acquired before treatment. Such evidence arises from physical examination, imaging, endoscopy, biopsy, surgical exploration, and other relevant examinations. (b) Pathological classification (Post-surgical histopathological classification), designated pTNM. This is based on the evidence acquired before treatment, supplemented or modified by the additional evidence acquired from surgery and from pathological examination. The pathological assessment of the primary tumour (pT) entails a resection of the primary tumour or biopsy adequate to evaluate the highest pT category. The pathological assessment of the regional lymph nodes (pN) entails removal of nodes adequate to validate the absence of regional lymph node metastasis (pN0) and sufficient to evaluate the highest pN category. The pathological assessment of distant metastasis (pM) entails microscopic examination. 3. After assigning T, N, and M and/or pT, pN, and pM categories, these may be grouped into stages. The TNM classification and stage grouping, once established, must remain unchanged in the medical records. The clinical stage is essential to select and evaluate therapy, while the pathological stage provides the most precise data to estimate prognosis and calculate end results. 4. If there is doubt concerning the correct T, N, or M category to which a particular case should be allotted, then the lower (i.e., less advanced) category should be chosen. This will also be reflected in the stage grouping. 5. In the case of multiple simultaneous tumours in one organ, the tumour with the highest T category should be classified and the multiplicity or the number of tumours should be indicated in parentheses, e.g., T2 (m) or T2 (5). In simultaneous bilateral cancers of paired organs, each tumour should be classified independently. In tumours of the liver, ovary, and fallopian tube, multiplicity is a criterion of T classification. 6. Definitions of TNM categories and stage grouping may be telescoped or expanded for clinical or research purposes as long as basic definitions recommended are not changed. For instance, any T, N, or M can be divided into subgroups. 4. Anatomical Regions and Sites The sites in this classification are listed by code number of the International Classification of Diseases for Oncology15. Each region or site is described under the following headings: - Rules for classification with the procedures for assessing the T, N, and M categories - Anatomical sites, and subsites if appropriate - Definition of the regional lymph nodes - TNM Clinical classification - pTNM Pathological classification - G Histopathological grading - Stage grouping - Summary for the region or site 5. TNM Clinical ClassificationThe following general definitions are used throughout: 5.1. T - Primary Tumour TX. Primary tumour cannot be assessed T0. No evidence of primary tumour Tis. Carcinoma in situ T1, T2, T3, T4. Increasing size and/or local extent of the primary tumour 5.2. N - Regional Lymph Nodes NX. Regional lymph nodes cannot be assessed N0. No regional lymph node metastasis N1. Regional lymph node metastasis 5.3. M - Distant Metastasis MX. Distant metastasis cannot be assessed M0. No distant metastasis M1. Distant metastasis The categories M1 and pM1 may be further specified according to the following notation: 6. 
pTNM Pathological Classification The following general definitions are used throughout: 6.1. pT - Primary Tumour pTX. Primary tumour cannot be assessed histologically pT0. No histological evidence of primary tumour pTis. Carcinoma in situ pT1, pT2, pT3, pT4. Increasing size and/or local extent of the primary tumour histologically 6.2. pN - Regional Lymph Nodes pNX. Regional lymph nodes cannot be assessed histologically pN0. No regional lymph node metastasis histologically pN1, pN2, pN3. Increasing involvement of regional lymph nodes histologically - Direct extension of the primary tumour into lymph nodes is classified as lymph node metastasis. - A tumour nodule in the connective tissue of a lymph drainage area without histologic evidence of residual lymph node is classified in the pN category as a regional lymph node metastasis if the nodule has the form and smooth contour of a lymph node. A tumour nodule with an irregular contour is classified in the pT category, i.e., discontinuous extension. It may also be classified as venous invasion (V classification). - When size is a criterion for pN classification, measurement is made of the metastasis, not of the entire lymph node. - Cases with micrometastasis only, i.e., no metastasis larger than 0.2 cm, can be identified by the addition of "(mi)", e.g., pN1(mi) or pN2(mi) 6.3. Sentinel Lymph Node The sentinel lymph node is the first lymph node to receive lymphatic drainage from a primary tumour. If it contains metastatic tumour this indicates that other lymph nodes may contain tumour. If it does not contain metastatic tumour, other lymph nodes are not likely to contain tumour. Occasionally there is more than one sentinel lymph node. The following designations are applicable when sentinel lymph node assessment is attempted: pNX (sn). Sentinel lymph node could not be assessed pN0 (sn). No sentinel lymph node metastasis pN1 (sn). Sentinel lymph node metastasis 6.4. Isolated Tumour Cells Isolated tumour cells (ITC) are single tumour cells or small clusters of cells not more than 0.2 mm in greatest dimension that are usually detected by immunohistochemistry or molecular methods, but which may be verified with H and E stains. ITCs do not typically show evidence of metastatic activity (e.g., proliferation or stromal reaction) or penetration of vascular or lymphatic sinus walls. Cases with ITC in lymph nodes or at distant sites should be classified as N0 or M0, respectively. The same applies to cases with findings suggestive of tumour cells or their components by non-morphologic techniques such as flow cytometry or DNA analysis. These cases should be analysed separately16. Their classification is as follows. pN0. No regional lymph node metastasis histologically, no examination for isolated tumour cells (ITC) pN0(i-). No regional lymph node metastasis histologically, negative morphological findings for ITC pN0(i+). No regional lymph node metastasis histologically, positive morphological findings for ITC pN0(mol-). No regional lymph node metastasis histologically, negative non-morphological findings for ITC pN0(mol+). No regional lymph node metastasis histologically, positive non-morphological findings for ITC Cases with or examined for isolated tumour cells (ITC) in sentinel lymph nodes can be classified as follows: pN0 (i-)(sn). No sentinel lymph node metastasis histologically, negative morphological findings for ITC pN0 (i+)(sn). No sentinel lymph node metastasis histologically, positive morphological findings for ITC pN0 (mol-)(sn). 
No sentinel lymph node metastasis histologically, negative non-morphological findings for ITC pN0 (mol+)(sn). No sentinel lymph node metastasis histologically, positive non-morphological findings for ITC 6.5. pM - Distant Metastasis pMX. Distant metastasis cannot be assessed microscopically pM0. No distant metastasis microscopically pM1. Distant metastasis microscopically The category pM1 may be further specified in the same way as M1 (see M - Distant Metastasis). Isolated tumour cells found in bone marrow with morphological techniques are classified according to the scheme for N, e.g., M0(i+). For non-morphologic findings "mol" is used in addition to M0, e.g., M0(mol+). 6.6. Subdivisions of pTNM Subdivisions of some main categories are available for those who need greater specificity (e.g., pT1a, 1b or pN2a, 2b). 7. Histopathological Grading In most sites further information regarding the primary tumour may be recorded under the following heading: G - Histopathological Grading GX. Grade of differentiation cannot be assessed G1. Well differentiated G2. Moderately differentiated G3. Poorly differentiated - Grades 3 and 4 can be combined in some circumstances as "G3-4, Poorly differentiated or undifferentiated." - The bone and soft tissue sarcoma classifications also use "high grade" and "low grade." - Special systems of grading are recommended for tumours of breast, corpus uteri, and liver. 8. Additional DescriptorsFor identification of special cases in the TNM or pTNM classification, the m, y, r, and a symbols are used. Although they do not affect the stage grouping, they indicate cases needing separate analysis. m Symbol. The suffix m, in parentheses, is used to indicate the presence of multiple primary tumours at a single site. See TNM rule no. 5. y Symbol. In those cases in which classification is performed during or following initial multimodality therapy, the cTNM or pTNM category is identified by a y prefix. The ycTNM or ypTNM categorizes the extent of tumour actually present at the time of that examination. The y categorization is not an estimate of the extent of tumour prior to multimodality therapy. r Symbol. Recurrent tumours, when classified after a disease-free interval, are identified by the prefix r. a Symbol. The prefix a indicates that classification is first determined at autopsy. 9. Optional Descriptors9.1. L - Lymphatic Invasion LX. Lymphatic invasion cannot be assessed L0. No lymphatic invasion L1. Lymphatic invasion 9.2. V - Venous Invasion VX. Venous invasion cannot be assessed V0. No venous invasion V1. Microscopic venous invasion V2. Macroscopic venous invasion Note: Macroscopic involvement of the wall of veins (with no tumour within the veins) is classified as V2. The C-factor, or certainty factor, reflects the validity of classification according to the diagnostic methods employed. Its use is optional. The C-factor definitions are: C1. Evidence from standard diagnostic means (e.g., inspection, palpation, and standard radiography, intraluminal endoscopy for tumours of certain organs) C2. Evidence obtained by special diagnostic means (e.g., radiographic imaging in special projections, tomography, computerized tomography [CT], ultrasonography, lymphography, angiography; scintigraphy; magnetic resonance imaging [MRI]; endoscopy, biopsy, and cytology) C3. Evidence from surgical exploration, including biopsy and cytology C4. Evidence of the extent of disease following definitive surgery and pathological examination of the resected specimen C5. 
Evidence from autopsy Example: Degrees of C may be applied to the T, N, and M categories. A case might be described as T3C2, N2C1, M0C2. The TNM clinical classification is therefore equivalent to C1, C2, and C3 in varying degrees of certainty, while the pTNM pathological classification generally is equivalent to C4. 10. Residual Tumour (R) ClassificationThe absence or presence of residual tumour after treatment is described by the symbol R. More details can be found in the TNM Supplement (see footnote 11). TNM and pTNM describe the anatomical extent of cancer in general without considering treatment. They can be supplemented by the R classification, which deals with tumour status after treatment. It reflects the effects of therapy, influences further therapeutic procedures and is a strong predictor of prognosis. The definitions of the R categories are: RX. Presence of residual tumour cannot be assessed R0. No residual tumour R1. Microscopic residual tumour R2. Macroscopic residual tumour 11. Stage GroupingClassification by the TNM system achieves reasonably precise description and recording of the apparent anatomical extent of disease. A tumour with four degrees of T, three degrees of N, and two degrees of M will have 24 TNM categories. For purposes of tabulation and analysis, except in very large series, it is necessary to condense these categories into a convenient number of TNM stage groups. Carcinoma in situ is categorized stage 0; cases with distant metastasis stage IV (except at certain sites, e.g., papillary and follicular carcinoma of thyroid). The grouping adopted is such as to ensure, as far as possible, that each group is more or less homogeneous in respect of survival, and that the survival rates of these groups for each cancer site are distinctive. For pathological stage grouping, if sufficient tissue has been removed for pathologic examination to evaluate the highest T and N categories, M1 may be either clinical (cM1) or pathologic (pM1). However, if only a distant metastasis has had microscopic confirmation, the classification is pathologic (pM1) and the stage is pathologic. 12. Site SummaryAs an aide-mémoire or as a means of reference, a simple summary of the chief points that distinguish the most important categories is added at the end of each site. These abridged definitions are not completely adequate, and the full definitions should always be consulted. 13. Related Classifications Since 1958, WHO has been involved in a programme aimed at providing internationally acceptable criteria for the histologic diagnosis of tumours. This has resulted in the International Histological Classification of Tumours, which contains, in an illustrated multivolume series, definitions of tumour types and a proposed nomenclature. A new series, WHO Classification of Tumours-Pathology and Genetics of Tumours, continues this effort. The publications can be ordered online at www.iarc.fr/who-bluebooks/ or by email, [email protected]. The WHO International Classification of Diseases for Oncology (ICD-O) (see footnote 15) s a coding system for neoplasms by topography and morphology and for indicating behaviour (e.g., malignant, benign). This coded nomenclature is identical in the morphology field for neoplasms to the Systematized Nomenclature of Medicine (SNOMED)17. 
13. Related Classifications
Since 1958, WHO has been involved in a programme aimed at providing internationally acceptable criteria for the histologic diagnosis of tumours. This has resulted in the International Histological Classification of Tumours, which contains, in an illustrated multivolume series, definitions of tumour types and a proposed nomenclature. A new series, WHO Classification of Tumours - Pathology and Genetics of Tumours, continues this effort. The publications can be ordered online at www.iarc.fr/who-bluebooks/ or by email, [email protected]. The WHO International Classification of Diseases for Oncology (ICD-O) (see footnote 15) is a coding system for neoplasms by topography and morphology and for indicating behaviour (e.g., malignant, benign). This coded nomenclature is identical in the morphology field for neoplasms to the Systematized Nomenclature of Medicine (SNOMED) (see footnote 17).
In the interest of promoting national and international collaboration in cancer research and specifically of facilitating cooperation in clinical investigations, it is recommended that the WHO Classification of Tumours be used for classification and definition of tumour types and that the ICD-O code be used for storage and retrieval of data.
References
1. Denoix PF: Bull Inst Nat Hyg (Paris) 1944;1:69. 1944;2:82. 1950;5:81. 1952;7:743.
2. World Health Organization Technical Report Series, number 53, July 1952, pp. 47-48.
3. International Union Against Cancer (UICC), Committee on Clinical Stage Classification and Applied Statistics: Clinical stage classification and presentation of results, malignant tumours of the breast and larynx. Paris; 1958.
4. International Union Against Cancer (UICC), Committee on Stage Classification and Applied Statistics: Clinical stage classification and presentation of results, malignant tumors of the breast. Paris; 1959.
5. International Union Against Cancer (UICC): TNM Classification of malignant tumours. Geneva; 1968.
6. International Union Against Cancer (UICC): TNM General Rules. Geneva; 1969.
7. International Union Against Cancer (UICC): TNM Classification of malignant tumours. 2nd ed. Geneva; 1974.
8. International Union Against Cancer (UICC): TNM Classification of malignant tumours. 3rd ed. Harmer MH, ed. Geneva; 1978. Enlarged and revised 1982.
9. International Union Against Cancer (UICC): TNM Classification of malignant tumours. 4th ed. Hermanek P, Sobin LH, eds. Berlin, Heidelberg, New York: Springer Verlag; 1987. Revised 1992.
10. International Union Against Cancer (UICC): TNM Supplement 1993. A commentary on uniform use. Hermanek P, Henson DE, Hutter RVP, Sobin LH, eds. Berlin, Heidelberg, New York: Springer Verlag; 1993.
11. International Union Against Cancer (UICC): TNM Supplement. A commentary on uniform use. 2nd ed. Wittekind Ch, Henson DE, Hutter RVP, Sobin LH, eds. New York: Wiley; 2001.
12. International Union Against Cancer (UICC): Prognostic factors in cancer. Hermanek P, Gospodarowicz MK, Henson DE, Hutter RVP, Sobin LH, eds. Berlin, Heidelberg, New York: Springer Verlag; 1995.
13. International Union Against Cancer (UICC): Prognostic factors in cancer. 2nd ed. Gospodarowicz MK, Henson DE, Hutter RVP, O'Sullivan B, Sobin LH, Wittekind Ch, eds. New York: Wiley; 2001.
14. Greene FL, Page D, Morrow M, Balch C, Haller D, Fritz A, Fleming I, eds. AJCC Cancer Staging Manual, 6th ed. New York: Springer.
15. Fritz A, Percy C, Jack A, Shanmugaratnam K, Sobin L, Parkin DM, Whelan S, eds. WHO International Classification of Diseases for Oncology ICD-O, 3rd ed. Geneva: WHO; 2000.
16. Hermanek P, Hutter RVP, Sobin LH, Wittekind Ch. Classification of isolated tumor cells and micrometastasis. Cancer 1999;86:2668-2673.
17. SNOMED International: The systematized nomenclature of human and veterinary medicine. Northfield, Ill: College of American Pathologists, http://snomed.org
<urn:uuid:81836099-d8a7-4373-a1c5-8f3bca47a7bd>
CC-MAIN-2016-26
http://cancerstaging.blogspot.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.878061
7,069
2.875
3
Simplicity is crucial to design optimization at nanoscale February 3, 2009 By Denise Brehm Civil & Environmental Engineering MIT researchers who study the structure of protein-based materials with the aim of learning the key to their lightweight and robust strength have discovered that the particular arrangement of proteins that produces the sturdiest product is not the arrangement with the most built-in redundancy or the most complicated pattern. Instead, the optimal arrangement of proteins in the rope-like structures they studied is a repeated pattern of two stacks of four bundled alpha-helical proteins. This composition of two repeated hierarchies (stacks and bundles) provides great strength—the ability to withstand mechanical pressure without giving way—and great robustness—the ability to perform mechanically, even if flawed. Because the alpha-helical protein serves as the building block of many common materials, understanding the properties of those materials has been the subject of intense scientific inquiry since the protein's discovery in the 1940s. In a paper published in the Jan. 27 online issue of Nanotechnology, Markus Buehler and Theodor Ackbarow describe a model of the protein’s performance, based on molecular dynamics simulations. With their model they tested the strength and robustness of four different combinations of eight alpha-helical proteins: a single stack of eight proteins, two stacks of four bundled proteins, four stacks of two bundled proteins, and double stacks of two bundled proteins. Their molecular models replicate realistic molecular behavior, including hydrogen bond formation in the coiled spring-like alpha-helical proteins. “The traditional way of designing materials is to consider properties at the macro level, but a more efficient way of materials’ design is to play with the structural makeup at the nanoscale,” said Buehler, the Esther and Harold E. Assistant Professor in the Department of Civil and Environmental Engineering. “This provides a new paradigm in engineering that enables us to design a new class of materials.” More and more frequently, natural protein materials are being used as inspiration for the design of synthetic materials that are based on nanowires and carbon nanotubes, which can be made to be much stronger than biological materials. Buehler and Ackbarow's work demonstrates that by rearranging the same number of nanoscale elements into hierarchies, the performance of a material can be radically changed. This could eliminate the need to invent new materials for different applications. In a follow-up study, Buehler and CEE graduate students Zhao Qin and Steve Cranford ran similar tests using more than 16,000 elements instead of eight. They found that 98 percent of the randomly arranged rope-like structures did not meet the optimal performance level of the self-assembled natural molecules, which made up the other 2 percent of the structures. The most successful of those again utilized the bundles of four alpha-helical proteins. That analysis shows that random arrangements of elements typically lead to inferior performance, and may explain why many engineered materials are not yet capable of combining disparate properties such as robustness and strength. “Only a few specific nanostructured arrangements provide the basis for optimal material performance, and this must be incorporated in the material design process,” said Buehler. 
This work is funded by the Army Research Office, a National Science Foundation CAREER Award, and the Air Force Office of Scientific Research. Ackbarow, a graduate student at the Max Planck Institute of Colloids and Interfaces in Potsdam, Germany, was supported in this work by the German National Academic Foundation, the Hamburg Foundation for research studies abroad and the Dr. Juergen Ulderup Foundation.
<urn:uuid:6e315456-1bc2-4947-a6f8-21a5e59a9fc8>
CC-MAIN-2016-26
http://cee.mit.edu/news/newsreleases/2009/stacks_and_bundles
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928293
751
2.765625
3
One day when Nuraini was taking a shower, she felt a hard lump on her left breast. It was like a tiny, moving ball trying to break out from below the skin. She went to visit a doctor at a nearby Puskesmas (community health center) and was told the lump was nothing to worry about. Luckily her husband insisted she get a second opinion at a hospital. "It turned out it was a benign tumor at stage one," Nuraini said. Breast cancer usually develops in stages, from stage one to stage four. A month after the tumor was diagnosed, Nuraini underwent an operation, which was followed by radiation and chemotherapy. Now she is in remission from the illness that could have killed her. However, Nuraini was lucky. Many women suffering from the symptoms of breast cancer delay paying a visit to the doctor. "About 70 percent of people diagnosed with breast cancer are already at stage three or four, which is usually considered too late for medical treatment to be effective," oncologist and surgeon Sonar S. Panigoro from Cipto Mangunkusumo Hospital said. Breast cancer occurs when cancer cells attack glandular breast tissue. Most cases of this type of cancer are found on the upper part of the breast closest to the arm. Breast cancer can spread by way of the lymphatic system or blood stream to the lungs, liver, bones or other organs, or can spread directly to the skin. It can also occur in men, although cases are very rare. In Indonesia, only one man diagnosed with breast cancer died in 2006. Breast cancer is the world's fifth most common cause of cancer-related death, after lung cancer, stomach cancer, liver cancer and colon cancer. Breast cancer resulted in 502,000 deaths (7 percent of cancer-related deaths and almost 1 percent of all deaths) worldwide in 2005. "Here it is estimated that between 18 to 20 percent of women may be diagnosed with breast cancer. It ranks second after cervical cancer," said Sonar. Sonar said when breast cancer is at stage one or two, operations can be performed, followed by a combination of radiation therapy, chemotherapy and hormone therapy. "However, if breast cancer is at stage three or four, the adjuvant therapies are pursued first before an operation is attempted. But in many cases, it is too late for an operation," he said. Depending on each patent's age and the type of cancer they have, cancer cases are divided into various categories from high risk to low risk. Each category of cancer is treated differently. Treatment possibilities include radiation therapy, chemotherapy, hormone therapy and immune therapy. Early detection is the best way to deal with breast cancer. However, in many cases slow-growing breast tumors may not be detectable by touch for up to eight years. Women can examine their own breasts regularly by pressing each breast firmly and carefully using three fingers. It is best to do this one week after menstruation. However, it is more reliable to seek a mammogram (x-ray), USG (ultrasonography) or advanced MRI (magnetic resonance imaging) to check for breast cancer. With technology improving rapidly, breast cancer cases are increasingly being detected early before any symptoms are present. "The mammography is recommended for women over 40, while the other early detection methods are best for women under 40," said Sonar. While the cause of breast cancer remains to a large extent unknown, many risk factors have been recognized. 
These include gender, age, hormones, a high-fat diet, alcohol intake, obesity and environmental factors such as tobacco consumption and radiation. Psychological aspects should also be taken seriously as not all breast cancer patients cope with their illness in the same way. Many larger hospitals are affiliated with cancer support groups, which help patients cope with the issues they may face in a supportive environment. In Indonesia, the Reach to Recovery support group was formed in 1997 by the Indonesian Cancer Foundation (YKI). The support group is made up of breast cancer survivors who voluntarily provide counseling to people diagnosed with breast cancer. "The volunteers ensure patients that medical treatment is the best way to treat their illness. In many cases, patients listen to the volunteers more than their doctors," said program director Rabecca N. Angka, who also works at the YKI's Early Diagnostic Center in Lebak Bulus, South Jakarta. However, she said temptation among breast cancer patients to try alternative treatments remains high. Sonar said many breast cancer patients try alternative treatments before seeking medical advice because of what they see on television. "They say traditional healers can transfer the disease to an animal. Sometimes patients even come to believe that breast cancer is the result of black magic," he said. --Alpha Amirrachman
<urn:uuid:12f0b1ab-1700-4a1f-a5db-152313b73fa2>
CC-MAIN-2016-26
http://cempaka-health.blogspot.com/2008/01/early-detection-key-to-successful.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971494
995
2.890625
3
I once found a Mandarin Chinese dictionary which for each character listed the ancient, the traditional and the simplified Chinese characters. For each it described its meaning and how it developed into the current form. I cannot find it or anything similar anymore. The most I can find are sites which explain radicals of each character but not actual ancient symbols and variations. For instance, the word "eye" was shown as a pictogram resembling an eye, then into the traditional and then into 目 explaining "a simplified eye turned 90°". I know not all characters are made this way but again, maybe it was a dictionary of radicals but it was very good.
<urn:uuid:198039d9-06db-4be6-9544-a9abae562ec4>
CC-MAIN-2016-26
http://chinese.stackexchange.com/questions/1838/what-was-this-online-dictionarys-name-with-ancient-traditional-and-simplified-c/2629
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977142
132
2.671875
3
On January 3, 2008, the Center for History and New Media and the History Education Group at Stanford University were awarded the American Historical Association’s James Harvey Robinson Prize for Historical Thinking Matters <historicalthinkingmatters.org>. The biennial prize is awarded for the teaching aid that has made the most outstanding contribution to the teaching and learning of history in any field for public or educational purposes. Historical Thinking Matters is designed to teach students how to “think historically” by critically reading primary sources and participating in authentic inquiries about key topics in U.S. history. Sharon Leon, Director of Public Projects at CHNM, was joined by Sam Wineburg and Daisy Martin from the History Education Group to accept the award from AHA President Gabrielle M. Spiegel at the General Meeting in New York City. This is the third CHNM project to receive the Robinson Prize.
Since 1994, the Center for History and New Media at George Mason University has used digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past. We sponsor more than two dozen digital history projects and offer free tools and resources for historians. Teachinghistory.org is the central online location for accessing high-quality resources in K-12 U.S. history education. Explore the highlighted content on our homepage or visit individual sections for additional materials.
<urn:uuid:bd9d0646-66dd-413a-8274-ae9a3d594ab5>
CC-MAIN-2016-26
http://chnm.gmu.edu/tag/robinson-prize/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931286
291
2.734375
3
Amateurish historians often tell us that we must study the past to avoid repeating its mistakes. Such efforts rarely work out well. Consider the determination of the U.S. in the 1930s to avoid replicating the events of World War I or subsequent efforts to oppose communism in places like Vietnam to avoid repeating the appeasement of the 1930s. Laurie Maffly-Kipp, by contrast, offers an unusual, complex and thoughtful approach to history. With remarkable erudition and careful analysis, she presents the lives and ideas of 19th-century African-American writers, preachers and missionaries who thought long and hard about their past.
<urn:uuid:429c0c30-eb14-49d2-8953-a7996faa93e2>
CC-MAIN-2016-26
http://christiancentury.org/reviews/2010-08/becoming-african-american
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962747
134
2.765625
3
Learn German Animal Words and Phrases This page should act as a good reference for those who are looking for a place to build on their basic German vocabulary - especially when it comes to learning about German animal words and phrases, this is a great place to start. Now everyone can learn how to say the animal names in German with our German to English animals dictionary! On this page there are several animal words that have been translated from German to English. There's also a couple of translated sentences which contain animal references, listed below the list of animal names. Beyond that, you'll find our comment section, a list of articles about Germany, and our current German Web Poll. If you know the translations for any other animal words or related phrases, feel free to use the submit form at the bottom of the page. Everything on this site was submitted by our visitors, simply sharing some of what they know. Don't know German, but know another language instead? Feel free to submit translations in that language instead. All submissions will be posted to the site in a future update. To be notified of these future updates, there's a Twitter follow button also located at the bottom of the page where site updates are listed. German to English Animals: Whether you are taking online German classes, doing distance learning German, or studying German abroad in Europe, our German to English dictionary is always being updated with new German animal translations which makes it an excellent supplement for those trying to learn how to speak German. (Words in bold added during the last update.) German to English General Animal Words: das Tier - Animal Names of the Animals in German: der Bär - Bear der Elefant - Elephant der Fisch - Fish das Huhn - Chicken der Hund - Dog das Hündchen - Puppy das Kaninchen - Rabbit das Kätzchen - Kitten die Katze - Cat die Kuh - Cow das Pferd - Horse das Schwein - Pig das Stinktier - Skunk der Tiger - Tiger der Vogel - Bird German Sentences with Animal References: Beruhren Sie die Huhner nicht! - Do not touch the chickens! Leute sind gewohnlich dumme Schweine - People are generally pigs Leave a Comment If you found any errors in our translations, let us know in the comments below! Looking for a specific translation relating to the theme of this page? Free free to ask for one in the comment section as well. Otherwise, just let us know what's on your mind! Continue Learning with These Resources Articles about Germany: Vote in Our Web Poll Did You Know? German Language Trivia According to Google Trends, the countries most interested in learning German are: - Ireland (Dublin) - India (Karnataka, Tamil Nadu, Maharashtra, Delhi) - Singapore (Singapore) - Egypt (Cairo Governorate) - Australia (Queensland, Victoria, New South Wales) - New Zealand - United Kingdom (Scotland, England) - United Arab Emirates - South Africa *Specific sub-regions only listed if they were available. The Fine Print The German to English Dictionary featured at the Chromlea Language Tutor may contain some errors in the German language. It is to be seen with a grain of salt, as all the content is from actual user submissions and not checked for grammatical / spelling accuracy (though we do correct our content as we are informed of errors). We wish you the best with your learning German and hope you can find this site helpful in your German language study.
<urn:uuid:142d766d-8aa5-4a26-ab20-36c879f81342>
CC-MAIN-2016-26
http://chromlea.com/german/animals.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.893448
773
2.59375
3
Use, Testing and Treatment
Cocaine (benzoylmethyl ecgonine) is a crystalline tropane alkaloid that is obtained from the leaves of the coca plant. The name comes from "coca" in addition to the alkaloid suffix -ine, forming cocaine. It is both a stimulant of the central nervous system and an appetite suppressant. Specifically, it is a dopamine reuptake inhibitor, a norepinephrine reuptake inhibitor and a serotonin reuptake inhibitor, which mediates the functionality of these neurotransmitters as an exogenous transporter ligand. Because of the way it affects the mesolimbic reward pathway, cocaine is addictive.
Cocaine possession, cultivation, and distribution are illegal for non-medicinal and non-government sanctioned purposes in virtually all parts of the world. Although its free commercialization is illegal and has been severely penalized in virtually all countries, its use worldwide remains widespread in many social, cultural, and personal settings.
For over a thousand years South American indigenous peoples have chewed the coca leaf (Erythroxylon coca), a plant that contains vital nutrients as well as numerous alkaloids, including cocaine. The leaf was, and is, chewed almost universally by some indigenous communities—ancient Peruvian mummies have been found with the remains of coca leaves, and pottery from the time period depicts humans, cheeks bulged with the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation.
The coca plant.
When the Spaniards conquered South America, they at first ignored aboriginal claims that the leaf gave them strength and energy, and declared the practice of chewing it the work of the Devil. But after discovering that these claims were true, they legalized and taxed the leaf, taking 10% off the value of each crop. In 1569, Nicolás Monardes described the practice of the natives of chewing a mixture of tobacco and coca leaves to induce "great contentment."
In 1609, Padre Blas Valera wrote:
“Coca protects the body from many ailments, and our doctors use it in powdered form to reduce the swelling of wounds, to strengthen broken bones, to expel cold from the body or prevent it from entering, and to cure rotten wounds or sores that are full of maggots. And if it does so much for outward ailments, will not its singular virtue have even greater effect in the entrails of those who eat it?”
Although the stimulant and hunger-suppressant properties of coca had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Many scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient at the time, and the task was made harder because coca does not grow in Europe and the leaves spoil easily during travel.
The cocaine alkaloid was first isolated by the German chemist Friedrich Gaedcke in 1855. Gaedcke named the alkaloid "erythroxyline", and published a description in the journal Archiv der Pharmazie.
Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a Ph.D. student at the University of Göttingen in Germany, who then developed an improved purification process.
Niemann described every step he took to isolate cocaine in his dissertation titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860—it earned him his Ph.D. and is now in the British Library. He wrote of the alkaloid's “colourless transparent prisms” and said that, “Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue.” Niemann named the alkaloid “cocaine”—as with other alkaloids its name carried the “-ine” suffix (from Latin -ina).
The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. The synthesis started from tropinone, a related natural product, and took five steps.
With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant. In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly-discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution, with the other containing merely salt water. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently than the leg that had been immersed in salt water.
Carl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Corning demonstrated peridural anesthesia. In 1898, Heinrich Quincke used cocaine for spinal anesthesia.
In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the natives. He proceeded to experiment on himself and upon his return to Milan he wrote a paper in which he described the effects. In this paper he declared coca and cocaine (at the time they were assumed to be the same) as being useful medicinally, in the treatment of “a furred tongue in the morning, flatulence, [and] whitening of the teeth.”
Pope Leo XIII purportedly carried a hipflask of Vin Mariani with him, and awarded a Vatican gold medal to Angelo Mariani.
A chemist named Angelo Mariani who read Mantegazza’s paper became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become cocawine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink’s effect. It contained 6 mg cocaine per ounce of wine, but Vin Mariani which was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. A “pinch of coca leaves” was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed.
The actual amount of cocaine that Coca-Cola contained during the first twenty years of its production is practically impossible to determine. In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anaesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote that cocaine causes: “...exhilaration and lasting euphoria, which in no way differs from the normal euphoria of the healthy person...You perceive an increase of self-control and possess more vitality and capacity for work....In other words, you are simply normal, and it is soon hard to believe you are under the influence of any drug....Long intensive physical work is performed without any fatigue...This result is enjoyed without any of the unpleasant after-effects that follow exhilaration brought about by alcohol....Absolutely no craving for the further use of cocaine appears after the first, or even after repeated taking of the drug...” In 1885 the U.S. manufacturer Parke-Davis sold cocaine in various forms, including cigarettes, powder, and even a cocaine mixture that could be injected directly into the user’s veins with the included needle. The company promised that its cocaine products would “supply the place of food, make the coward brave, the silent eloquent and ... render the sufferer insensitive to pain.” By the late Victorian era cocaine use had appeared as a vice in literature, for example it was injected by Arthur Conan Doyle’s fictional Sherlock Holmes. 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers. In 1909, Ernest Shackleton took “Forced March” brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole. By the turn of the twentieth century, the addictive properties of cocaine had become clear, and the problem of cocaine abuse began to capture public attention in the United States. The dangers of cocaine abuse became part of a moral panic that was tied to the dominant racial and social anxieties of the day. In 1903, the American Journal of Pharmacy stressed that most cocaine abusers were “bohemians, gamblers, high- and low-class prostitutes, night porters, bell boys, burglars, racketeers, pimps, and casual laborers.” In 1914, Dr. Christopher Koch of Pennsylvania’s State Pharmacy Board made the racial innuendo explicit, testifying that, “Most of the attacks upon the white women of the South are the direct result of a cocaine-crazed Negro brain.” Mass media manufactured an epidemic of cocaine use among African Americans in the Southern United States to play upon racial prejudices of the era, though there is little evidence that such an epidemic actually took place. In the same year, the Harrison Narcotics Tax Act outlawed the sale and distribution of cocaine in the United States. This law incorrectly referred to cocaine as a narcotic, and the misclassification passed into popular culture. As stated above, cocaine is a stimulant, not a narcotic. Although technically illegal for purposes of distribution and use, the distribution, sale and use of cocaine was still legal for registered companies and individuals. 
Because of the misclassification of cocaine as a narcotic, the debate is still open on whether the government actually enforced these laws strictly. Cocaine was not considered a controlled substance until 1970, when the United States listed it as such in the Controlled Substances Act. Until that point, the use of cocaine was open and rarely prosecuted in the US due to the moral and physical debates commonly discussed. countries, cocaine is a popular recreational drug. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. Use of the powder form has stayed relatively constant, experiencing a new height of use during the late 1990s and early 2000s in the U.S., and has become much more popular in the last few years in the UK. Cocaine use is prevalent across all socioeconomic strata, including age, demographics, economic, social, political, religious, and livelihood. U.S. cocaine market exceeded $70 billion in street value for the year 2005, exceeding revenues by corporations such as Starbucks . There is a tremendous demand for cocaine in the U.S. market, particularly among those who are making incomes affording luxury spending, such as single adults and professionals with discretionary income. Cocaine’s status as a club drug shows its immense popularity among the “party crowd”. In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. However, a decision in the World Health Assembly banned the publication of the study. In the sixth meeting of the B committee the US representative threatened that "If WHO activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study has been recuperated. Available are profiles of cocaine use in 20 countries. A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users is the risk of ill effects or damage caused by the compounds used in adulteration. Cutting or "stamping on" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine) producing temporary anaesthaesia as many users believe a strong numbing effect is the result of strong and/or pure cocaine, ephedrine or similar stimulants that are to produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine or glucose, so introducing active adulterants gives the illusion of purity and to 'stretch' or make it so a dealer can sell more product than without the adulterants. The adulterant of sugars therefore allows the dealer to sell the product for a higher price because of the illusion of purity and allows to sell more of the product at that higher price, enabling dealers to make a lot of revenue with little cost of the adulterants. Cocaine trading carries large penalties in most jurisdictions, so user deception about purity and consequent high profits for dealers are the norm. A pile of A piece of compressed cocaine powder Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride (CAS 53-21-4). 
Street market cocaine is frequently adulterated or “cut” with various powdery fillers to increase its weight; the substances most commonly used in this process are baking soda; sugars, such as lactose, dextrose, inositol, and mannitol; and local anesthetics, such as lidocaine or benzocaine, which mimic or add to cocaine's numbing effect on mucous membranes. Cocaine may also be "cut" with other stimulants such as methamphetamine. Adulterated cocaine is often a white, off-white or pinkish powder. The color of “crack” cocaine depends upon several factors including the origin of the cocaine used, the method of preparation – with ammonia or baking soda – and the presence of impurities, but will generally range from white to a yellowish cream to a light brown. Its texture will also depend on the adulterants, origin and processing of the powdered cocaine, and the method of converting the base. It ranges from a crumbly texture, sometimes extremely oily, to a hard, almost Forms of cocaine is produced by macerating coca leaves along with water that has been acidulated with sulfuric acid, or an aromatic-based solvent, like kerosene or benzene. This is often accomplished by placing the ingredients into a vat and stomping on them, in a manner similar to the traditional method for crushing grapes. A more popular method in modern times is to form a makeshift "vat" by spreading a heavy nylon tarp on the floor of an enclosed area and shred the leaves with a gas-powered weed trimmer. This method is fast, and not only shreds the leaves, but results in bruising and fragmenting of the remaining pieces, aiding the extraction process. After the maceration is completed, the water is evaporated to yield a pasty mass of impure cocaine sulfate. The sulfate salt itself is an intermediate step to producing cocaine hydrochloride. As the name implies, “freebase” is the base form of cocaine, as opposed to the salt form of cocaine hydrochloride. Whereas cocaine hydrochloride is extremely soluble in water, cocaine base is insoluble in water and is therefore not suitable for drinking, snorting or injecting. Whereas cocaine hydrochloride is not well-suited for smoking because the temperature at which it vaporizes is very high and close to the temperature at which it burns; cocaine base vaporizes at a much lower temperature, which makes it suitable for inhalation. cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. is a popular route of ingestion because the cocaine is absorbed immediately into blood via the lungs, reaching the brain in about five seconds. The rush is much more intense than snorting the same amount of cocaine nasally, but the effects do not last as long. The peak of the freebase rush is over almost as soon as the user exhales the vapor, but the high typically lasts 5–10 minutes afterward. What makes freebasing particularly dangerous is that users typically do not wait that long for their next hit and will continue to smoke freebase until none is left. 
These effects are similar to those that can be achieved by injecting or “slamming” cocaine hydrochloride, but without the risks associated with intravenous drug use (though there are other serious risks associated with is produced by first dissolving cocaine hydrochloride in water. Once dissolved in water, cocaine hydrochloride (Coc-HCl) dissociates into the protonated cocaine ion (Coc-H+) and the chloride ion (Cl−). Any solids that remain suspended in the solution are impurities from the cut and are removed by filtration. A base, typically ammonia (NH3), is added to the solution. The following net acid-base reaction takes place: Coc-H+ + NH3 → Coc + NH4+ cocaine (Coc) is insoluble in water, it precipitates and the solution becomes cloudy. To recover the freebase in the "traditional" manner, diethyl ether is added to the solution. Since freebase is highly soluble in ether, a vigorous shaking of the mixture results in the freebase being dissolved in the ether. As ether is practically insoluble in water, it can be siphoned off. The ether is then left to evaporate, leaving behind the nearly pure freebase. ether is dangerous because ether is extremely flammable; its vapors are heavier than air and can "creep" from an open bottle, and in the presence of oxygen it can form peroxides, which can spontaneously combust. Comedian Richard Pryor performed a skit poking fun at himself for a 1980 incident in which he caused an explosion and ignited himself attempting to smoke "freebase", presumably while still wet with ether (though his ex-wife Jennifer Lee Pryor said that he poured over his body and torched himself in a drug psychosis). In its creation process, due to the dangers of using ether to produce pure freebase cocaine, cocaine producers began to omit the step of removing the freebase cocaine precipitate from the ammonia mixture. Typically, filtration processes are also omitted. The end result of this process is that the cut, in addition to the ammonium salt (NH4Cl), remains in the freebase cocaine after the mixture is evaporated. The “rock” that is thus formed also contains a small amount of water. Sodium bicarbonate (baking soda) is also preferred in preparing the freebase, for when commonly "cooked" the ratio is 50/50 to 40/60% cocaine/bicarbonate. This acts as a filler which extends the overall profitability of illicit sales. Crack cocaine may be reprocessed in small quantities with water (users refer to the resultant product as "cookback"). This removes the residual bicarbonate, and any adulterants or cuts that have been used in the previous handling of the cocaine and leaves a relatively pure, anhydrous cocaine base. When the rock is heated, this water boils, making a crackling sound (hence the onomatopoeic “crack”). Baking soda is now most often used as a base rather than ammonia for reasons of lowered stench and toxicity; however, any weak base can be used to make crack cocaine. Strong bases, such as sodium hydroxide, tend to hydrolyze some of the cocaine into non-psychoactive ecgonine. infusion (also referred to as Coca tea) is used in coca-leaf producing countries much as any herbal medicinal infusion would elsewhere in the world. The free and legal commercialization of dried coca leaves under the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. 
Visitors to the city of Cuzco in Peru, and La Paz in Bolivia are greeted with the offering of coca leaf infusions (prepared in tea pots with whole coca leaves) purportedly to help the newly-arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are a mild stimulation and mood lift. It does not produce any significant numbing of the mouth nor does it give a rush like snorting cocaine. In order to prevent the demonization of this product, its promoters publicize the unproven concept that much of the effect of the ingestion of coca leaf infusion would come from the secondary alkaloids, as being not only quantitatively different from pure cocaine but also qualitatively It has been promoted as an adjuvant for the treatment of cocaine dependence. In one controversial study, coca leaf infusion was used -in addition to counseling- to treat 23 addicted coca-paste smokers in Lima, Peru. Relapses fell from an average of four times per month before treatment with coca tea to one during the treatment. The duration of abstinence increased from an average of 32 days prior to treatment to 217 days during treatment. These results suggest that the administration of coca leaf infusion plus counseling would be an effective method for preventing relapse during treatment for cocaine addiction. Importantly, these results also suggest strongly that the primary pharmacologically active metabolite in coca leaf infusions is actually cocaine and not the secondary alkaloids. metabolite benzoylecgonine can be detected in the urine of people a few hours after drinking one cup of coca leaf infusion. Buy In Home Cocaine Many users rub the powder along the gum line, or onto a cigarette filter which is then smoked (called a "hoolie"), which numbs the gums and teeth - hence the colloquial names of "numbies", "gummers" or "cocoa puffs" for this type of administration. This is mostly done with the small amounts of cocaine remaining on a surface after insufflation. Another oral method is to wrap up some cocaine in rolling paper and swallow it. This is sometimes called a "snow bomb." Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the mouth between gum and cheek (much in the same as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Ingesting coca leaves generally is an inefficient means of administering cocaine. Advocates of the consumption of the coca leaf state that coca leaf consumption should not be criminalized as it is not actual cocaine, and consequently it is not properly the illicit drug. Because cocaine is hydrolyzed and rendered inactive in the acidic stomach, it is not readily absorbed when ingested alone. Only when mixed with a highly alkaline substance (such as lime) can it be absorbed into the bloodstream through the stomach. The efficiency of absorption of orally administered cocaine is limited by two additional factors. First, the drug is partly catabolized by the liver. Second, capillaries in the mouth and esophagus constrict after contact with the drug, reducing the surface area over which the drug can be absorbed. Nevertheless, cocaine metabolites can be detected in the urine of subjects that have sipped even one cup of coca leaf infusion. 
Therefore, this is an actual additional form of administration of cocaine, albeit an inefficient one. administered cocaine takes approximately 30 minutes to enter the bloodstream. Typically, only a third of an oral dose is absorbed, although absorption has been shown to reach 60% in controlled settings. Given the slow rate of absorption, maximum physiological and psychotropic effects are attained approximately 60 minutes after cocaine is administered by ingestion. While the onset of these effects is slow, the effects are sustained for approximately 60 minutes after their peak is attained. popular belief, both ingestion and insufflation result in approximately the same proportion of the drug being absorbed: 30 to 60%. Compared to ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes, however, a more realistic activation period is closer to 5 to 10 minutes, which is similar to ingestion of cocaine. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40 - 60 minutes after the peak effects are attained. Mate de coca or coca-leaf infusion is also a traditional method of consumption and is often recommended in coca producing countries, like Peru and Bolivia, to ameliorate some symptoms of altitude sickness. This method of consumption has been practiced for many centuries by the native tribes of South America. One specific purpose of ancient coca leaf consumption was to increase energy and reduce fatigue in messengers who made multi-day quests to other settlements. In 1986 an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as “Health Inca Tea.” While the packaging claimed it had been “decocainized,” no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and elevation, and the tea was essentially harmless. Despite this, the DEA seized several shipments in Hawaii, Chicago, Illinois, Georgia, and several locations on the East Coast of the United States, and the product was removed from the A man snorting cocaine with a rolled up dollar bill, 2007. (known colloquially as "snorting," "sniffing," or "blowing") is the most common method of ingestion of recreational powdered cocaine in the Western world. The drug coats and is absorbed through the mucous membranes lining the sinuses. When insufflating cocaine, absorption through the nasal membranes is approximately 30–60%, with higher doses leading to increased absorption efficiency. Any material not directly absorbed through the mucous membranes is collected in mucus and swallowed (this "drip" is considered pleasant by some and unpleasant by others). In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is because cocaine highly constricts blood vessels – and therefore blood and oxygen/nutrient flow – to that area. insufflation, cocaine powder must be divided into very fine particles. 
Cocaine of high purity breaks into fine dust very easily, except when it is moist (not well stored) and forms "chunks," which reduces the efficiency of nasal banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. Such devices are often called "tooters" by users. The cocaine typically is poured onto a flat, hard surface (such as a mirror, CD case or book) and divided into "bumps", "lines" or "rails", and then insufflated. As tolerance builds rapidly in the short-term (hours), many lines are often snorted to produce greater effects. A study by Bonkovsky and Mehta reported that, just like shared needles, the sharing of straws used to "snort" cocaine can spread blood diseases such as In the United States, as far back as 1992 many of the people sentenced by federal authorities for charges related to powder cocaine were Hispanic; more Hispanics than non-Hispanic White and non-Hispanic Black people received sentences for crimes related to powder cocaine. provides the highest blood levels of drug in the shortest amount of time. Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually when in excess of 120 milligrams) lasting 2 to 5 minutes including tinnitus & audio distortion. This is colloquially referred to as a "bell ringer". In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. mixture of cocaine and heroin, known as “speedball” is a particularly popular and dangerous combination, as the converse effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including celebrities such as John Belushi, Chris Farley, Mitch Hedberg, River Phoenix and Layne Staley. cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction. or crack cocaine is most often accomplished using a pipe made from a small glass tube, often taken from "Love roses," small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad – often called a "brillo" (actual Brillo pads contain soap, and are not used), or "chore", named for Chore Boy brand copper scouring pads, – serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. In a pinch, crack smokers sometimes smoke though a soda can with small holes in the bottom instead of a crack pipe. Also, the bottoms of small glass liquor bottles can be removed, and the bottles neck can then be stuffed with chore to use as a makeshift Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long – usually five to fifteen minutes. 
In a study performed on crack cocaine users, the average time taken for them to reach their peak subjective "high" was 1.4 minutes. Most (especially frequent) users crave more immediately after the peak. "Crack houses" depend on these cravings by providing a place for smoking crack to its users, and a ready supply of small bags for sale. cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt. Powdered cocaine is also sometimes smoked, though heat destroys much of the chemical; smokers often sprinkle it on marijuana. referring to paraphernalia and practices of smoking cocaine vary, as do the packaging methods in the street level sale. between cocaine & amphetamine with regard to DAT1 receptor reuptake blocking. Cocaine binds directly to the DAT1 transporter, whereas amphetamines phosphorylate and invert the transporter causing it to internalize. pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: = 2:3, serotonin:norepinephrine = 2:5 ) The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine transmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. This results in an enhanced and prolonged postsynaptic effect of dopaminergic signaling at dopamine receptors on the receiving neuron. Prolonged exposure to cocaine, as occurs with habitual use, leads to homeostatic dysregulation of normal (i.e. without cocaine) dopaminergic signaling via down-regulation of dopamine receptors and enhanced signal transduction. The decreased dopaminergic signaling after chronic cocaine use may contribute to depressive mood disorders and sensitize this important brain reward circuit to the reinforcing effects of cocaine (e.g. enhanced dopaminergic signalling only when cocaine is self-administered). This sensitization contributes to the intractable nature of addiction and relapse. brain regions such as the ventral tegmental area, nucleus accumbens, and prefrontal cortex are frequent targets of cocaine addiction research. Of particular interest is the pathway consisting of dopaminergic neurons originating in the ventral tegmental area that terminate in the nucleus accumbens. This projection may function as a "reward center", in that it seems to show activation in response to drugs of abuse like cocaine in addition to natural rewards like food or sex. While the precise role of dopamine in the subjective experience of reward is highly controversial among neuroscientists, the release of dopamine in the nucleus accumbens is widely considered to be at least partially responsible for cocaine's rewarding effects. This hypothesis is largely based on laboratory data involving rats that are trained to self-administer cocaine. If dopamine antagonists are infused directly into the nucleus accumbens, well-trained rats self-administering cocaine will undergo extinction (i.e. initially increase responding only to stop completely) thereby indicating that cocaine is no longer reinforcing (i.e. 
rewarding) the effects on serotonin (5-hydroxytryptamine, 5-HT) show across multiple serotonin receptors, and is shown to inhibit the re-uptake of 5-HT3 specifically as an important contributor to the effects of cocaine. The overabundance of 5-HT3 receptors in cocaine conditioned rats display this trait, however the exact effect of 5-HT3 in this process is unclear. The 5-HT2 receptor (particularly the subtypes 5-HT2AR, 5-HT2BR and 5-HT2CR) show influence in the evocation of hyperactivity displayed in cocaine use. In addition to the mechanism shown on the above chart, cocaine has been demonstrated to bind as to directly stabilize the DAT transporter on the open outward-facing conformation whereas other stimulants (namely phenethylamines) stabilize the closed conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT that otherwise still forms when amphetamine and similar molecules are bound. Cocaine's binding properties are such that it attaches so this hydrogen bond will not form and is blocked from formation due to the tightly locked orientation of the cocaine molecule. Research studies have suggested that the affinity for the transporter is not what is involved in habituation of the substance so much as the conformation and binding properties to where & how on the transporter the molecule binds. are effected by cocaine, as cocaine functions as a sigma ligand agonist. Further specific receptors it has been demonstrated to function on are NMDA and the D1 dopamine receptor. blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. The locomotor enhancing properties of cocaine may be attributable to its enhancement of dopaminergic transmission from the substantia nigra. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. increases the levels of dopamine in the brain, many cocaine users find that consumption of tobacco products during cocaine use enhances the euphoria. This, however, may have undesirable consequences, such as uncontrollable chain smoking during cocaine use (even users who do not normally smoke cigarettes have been known to chain smoke when using cocaine), in addition to the detrimental health effects and the additional strain on the cardiovascular system caused by In addition to irritability, mood disturbances, restlessness, paranoia, and auditory hallucinations, cocaine use can cause several dangerous physical conditions. It can lead to disturbances in heart rhythm and heart attacks, as well as chest pains or even respiratory failure. In addition, strokes, seizures and headaches are common in heavy users. often cause reduced food intake, many chronic users lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine effects, further, are shown to be potentiated for the user when used in conjunction with new surroundings and stimuli, and otherwise novel environs. extensively metabolized, primarily in the liver, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other significant metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. 
Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (pOHBE), and m-hydroxybenzoylecgonine. These do not include metabolites created beyond the body's standard metabolism of the drug, for example by the process of pyrolysis when cocaine is smoked. Depending on liver and kidney function, cocaine metabolites are detectable in urine. Benzoylecgonine can be detected in urine within four hours after cocaine intake and remains detectable in concentrations greater than 150 ng/ml typically for up to eight days after cocaine is used. Detection of accumulated cocaine metabolites in hair is possible in regular users until the sections of hair grown during use are cut or fall out. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested cocaethylene is both more euphorigenic and higher in cardiovascular toxicity than cocaine by itself. Cocaine is a potent central nervous system stimulant. Its effects can last from 20 minutes to several hours, depending upon the dosage of cocaine taken, its purity, and the method of administration. Signs of stimulation are hyperactivity, restlessness, increased blood pressure, increased heart rate and euphoria. The euphoria is sometimes followed by feelings of discomfort and depression and a craving to experience the drug again. Sexual interest and pleasure can be amplified. Side effects can include impotence, which usually increases with frequent usage. With excessive or prolonged use, the drug can cause itching, tachycardia, hallucinations, and paranoid delusions. Overdoses cause tachyarrhythmias and a marked elevation of blood pressure. These can be life-threatening, especially if the user has existing cardiac problems. The LD50 of cocaine when administered to mice is 95.1 mg/kg. Toxicity results in seizures, followed by respiratory and circulatory depression of medullar origin. This may lead to death from respiratory failure, stroke, cerebral hemorrhage, or heart failure. Cocaine is also highly pyrogenic, because the stimulation and increased muscular activity cause greater heat production. Heat loss is inhibited by the intense vasoconstriction. Cocaine-induced hyperthermia may cause muscle cell destruction and myoglobinuria, resulting in renal failure. Emergency treatment often consists of administering a benzodiazepine sedation agent, such as diazepam (Valium), to decrease the elevated heart rate and blood pressure. Physical cooling (ice, cold blankets, etc.) and paracetamol (acetaminophen) may be used to treat hyperthermia, while specific treatments are then developed for any further complications. There is no officially approved specific treatment for cocaine overdose, and although some drugs such as dexmedetomidine and rimcazole have been found to be useful for treating cocaine overdose in animal studies, no formal human trials have been carried out. In cases where a patient is unable or unwilling to seek medical attention, cocaine overdoses resulting in mild-to-moderate tachycardia (i.e., a resting pulse greater than 120 bpm) may be initially treated with 20 mg of orally administered diazepam or an equivalent benzodiazepine (e.g., 2 mg lorazepam). Acetaminophen and physical cooling may likewise be used to reduce mild hyperthermia (<39 °C). However, a history of high blood pressure or cardiac problems puts the patient at high risk of cardiac arrest or stroke, and requires immediate medical treatment.
Similarly, if benzodiazepine sedation fails to reduce heart rate or body temperatures fails to lower, professional intervention is necessary. primary acute effect on brain chemistry is to raise the amount of dopamine and serotonin in the nucleus accumbens (the pleasure center in the brain); this effect ceases, due to metabolism of cocaine to inactive compounds and particularly due to the depletion of the transmitter resources (tachyphylaxis). This can be experienced acutely as feelings of depression, as a "crash" after the initial high. Further mechanisms occur in chronic cocaine use. The "crash" is accompanied with muscle spasms throughout the body, also known as the "jitters", muscle weakness, headaches, dizziness, and suicidal thoughts. Not all users will experience these, but most tend to experience some or all of these shown that cocaine usage during pregnancy triggers premature labor and may lead to abruptio placentae. cause coronary artery spasms which lead to a myocardial infarction. This effect can happen randomly to any user. The coronary artery spasms can occur on the user's first usage or any other usage after. The coronary spasms cause the ectopic ventricular foci of the heart to become hypoxic and the extreme irritability can trigger life-threatening ventricular arrhythmias. Main effects of chronic cocaine use. intake causes brain cells to adapt functionally to strong imbalances of transmitter levels in order to compensate extremes. Thus, receptors disappear from the cell surface or reappear on it, resulting more or less in an "off" or "working mode" respectively, or they change their susceptibility for binding partners (ligands) – mechanisms called However, studies suggest cocaine abusers do not show normal age-related loss of striatal DAT sites, suggesting cocaine has neuroprotective properties for dopamine neurons. The experience of insatiable hunger, aches, insomnia/oversleeping, lethargy, and persistent runny nose are often described as very unpleasant. Depression with suicidal ideation may develop in very heavy users. Finally, a loss of vesicular monoamine transporters, neurofilament proteins, and other morphological changes appear to indicate a long term damage of dopamine neurons. All these effects contribute a rise in tolerance thus requiring a larger dosage to achieve the same effect. The lack of normal amounts of serotonin and dopamine in the brain is the cause of the dysphoria and depression felt after the initial high. Physical withdrawal is not dangerous, and is in fact restorative. The diagnostic criteria for cocaine withdrawal are characterized by a dysphoric mood, fatigue, unpleasant dreams, insomnia or hypersomnia, erectile dysfunction, increased appetite, psychomotor retardation or agitation, and anxiety. effects from chronic smoking of cocaine include hemoptysis, bronchospasm, pruritus, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. However, cocaine does often cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. 
Due to the absorption of the cocaine from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid. Cocaine may also greatly increase this risk of developing rare autoimmune or connective tissue diseases such as lupus, Goodpasture's disease, vasculitis, glomerulonephritis, Stevens-Johnson syndrome and other diseases. It can also cause a wide array of kidney diseases and renal failure. While these conditions are normally found in chronic use they can also be caused by short term exposure in doubles both the risks of hemorrhagic and ischemic strokes , as well as increases the risk of other infarctions, such as myocardial infarction. Years after the abuse has ended, many ex-abusers report a noticeably reduced attention span. Cocaine as a local historically useful as a topical anesthetic in eye and nasal surgery, although it is now predominantly used for nasal and lacrimal duct surgery. The major disadvantages of this use are cocaine's intense vasoconstrictor activity and potential for cardiovascular toxicity. Cocaine has since been largely replaced in Western medicine by synthetic local anaesthetics such as benzocaine, and tetracaine though it remains available for use if specified. If vasoconstriction is desired for a procedure (as it reduces bleeding), the anesthetic is combined with a vasoconstrictor such as phenylephrine or epinephrine. In Australia it is currently prescribed for use as a local anesthetic for conditions such as mouth and lung ulcers. Some ENT specialists occasionally use cocaine within the practice when performing procedures such as nasal cauterization. In this scenario dissolved cocaine is soaked into a ball of cotton wool, which is placed in the nostril for the 10-15 minutes immediately prior to the procedure, thus performing the dual role of both numbing the area to be cauterized and also vasoconstriction. Even when used this way, some of the used cocaine may be absorbed through oral or nasal mucosa and give systemic researchers from Kyoto University Hospital proposed the use of cocaine in conjunction with phenylephrine administered in the form of an eye drop as a diagnostic test for Parkinson's disease. "cocaine" was made from "coca" + the suffix "-ine"; from its use as a local anaesthetic a suffix "-caine" was extracted and used to form names of synthetic distribution and sale of cocaine products is restricted (and illegal in most contexts) in most countries as regulated by the Single Convention on Narcotic Drugs, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States the manufacture, importation, possession, and distribution of cocaine is additionally regulated by the 1970 Controlled Substances Act. such as Peru and Bolivia permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless prohibit the production, sale and consumption of cocaine. Some parts of Australia allow processed cocaine for medicinal uses only. according to the United Nations, 589 metric tons of cocaine were seized globally by law enforcement authorities. Colombia seized 188 tons, the United States 166 tons, Europe 79 tons, Peru 14 tons, Bolivia 9 tons, and the rest of the world 133 tons. cocaine, a form in which it is commonly transported. Because of the extensive processing it undergoes during preparation, cocaine is generally treated as a 'hard drug', with severe penalties for possession and trafficking. 
Demand remains high, and consequently black market cocaine is quite expensive. Unprocessed cocaine, such as coca leaves, are occasionally purchased and sold, but this is exceedingly rare as it is much easier and more profitable to conceal and smuggle it in powdered form. The scale of the market is immense: 770 tonnes times $100 per gram retail = up to $77 billion. Colombia is the world's leading producer of cocaine. Due to Colombia's 1994 legalization of small amounts of cocaine for personal use, while sale of cocaine was still prohibited, the result was the spread of local coca crops, partly justified by the local demand. of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia, and from locally grown coca. There was a 28% increase from the amount of potentially harvestable coca plants which were grown in Colombia in 1998 . This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, only makes up a small fragment of total coca production, most of which is used for the illegal drug trade. eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded as numerous smaller fields in Colombia, rather than the larger plantations. of coca has become an attractive, and in some cases even necessary, economic decision on the part of many growers due to the combination of several factors, including the persistence of worldwide demand, the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, and the spread of new strains of the coca plant. cocaine would be highly desirable to the illegal drug industry, as it would eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine. However, natural cocaine remains the lowest cost and highest quality supply of cocaine. synthesis of cocaine is rarely done. Formation of inactive enantiomers (cocaine has 4 chiral centres - 1R,2R,3S,5S - hence a total potential of 16 possible enantiomers and disteroisomers) plus synthetic by-products limits the yield and Note, names like 'synthetic cocaine' and 'new cocaine' have been misapplied to phencyclidine (PCP) and various designer drugs. in a charango, 2008. criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, Peru, and smuggled into the United States and Europe, the United States being the worlds largest consumer of Cocaine , where it is sold at huge markups; usually in the US at $50-$75 for 1 gram (or a "fitty rock"), and $125-200 for 3.5 grams (1/8th of an ounce, or an "eight ball"). 
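As a rough arithmetic check on the market-scale estimate quoted above (assuming, purely for illustration, that all 770 tonnes reach consumers and sell at the $100-per-gram retail figure):

$$ 770\ \text{t} = 7.7\times 10^{8}\ \text{g}, \qquad 7.7\times 10^{8}\ \text{g} \times \$100/\text{g} = \$7.7\times 10^{10} \approx \$77\ \text{billion}. $$

By the same arithmetic, the retail prices quoted for an "eight ball" ($125–$200 for 3.5 g) work out to roughly $36–$57 per gram, well below the $100-per-gram benchmark, which is one reason such back-of-the-envelope totals are best read as upper bounds.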
shipments from South America transported through Mexico or Central America are generally moved over land or by air to staging sites in northern Mexico. The cocaine is then broken down into smaller loads for smuggling across the The primary cocaine importation points in the United States are in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.-Mexico border. Sixty Five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as “mules” (or “mulas”), who cross a border either legally, e.g. through a port or airport, or illegally through undesignated points along the border. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body. If the mule gets through without being caught, the gangs will reap most of the profits. If he or she is caught however, gangs will sever all links and the mule will usually stand trial for trafficking by him/herself. traffickers from Colombia, and recently Mexico, have also established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug. The traffickers use a variety of smuggling techniques to transfer their drug to U.S. markets. These include airdrops of 500–700 kg in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of 500–2,000 kg, and the commercial shipment of tonnes of cocaine through the port of Miami. Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, as those used by the local populations. drug subs are the latest tool drug runners are using to bring cocaine north from Colombia, it was reported on March 20, 2008. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them. Sales to consumers readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper and provides a longer-lasting high. Numbers of cocaine users are still very large, with a concentration among urban youth. In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": for example, $10 might purchase a "dime bag," a very small amount (0.1–0.15 g) of cocaine. Twenty dollars might purchase .15–.3 g. However, in lower Texas, it's sold cheaper due to it being easier to receive: a dime for $10 is .4g, a 20 is .8-1.0 gram and a 8-ball (3.5g) is sold for $60 to $80 dollars, depending on the quality and dealer. 
These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic prices are astronomical compared to those in the USA, with £40 (typically $80) getting 1 gram of cocaine (compared to $20-$40 in the USA). Monitoring Centre for Drugs and Drug Addiction reports that the typical retail price of cocaine varied between 50€ and 75€ per gram in most European countries, although Cyprus, Romania, Sweden and Turkey reported much higher values. Bags of cocaine, adulterated with fruit flavoring. cocaine consumption currently stands at around 600 metric tons, with the United States consuming around 300 metric tons, 50% of the total, Europe about 150 metric tons, 25% of the total, and the rest of the world the remaining 150 metric tons or 25%. Cocaine is "cut" with many substances such as: According to a 2007 United Nations report, Spain is the country with the highest rate of cocaine usage (3.0% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are the United States (2.8%), England and Wales (2.4%), Canada (2.3%), Italy (2.1%), Bolivia (1.9%), Chile (1.8%), and In the United Cocaine is the second most popular illegal recreational drug in the U.S. (behind marijuana) and the U.S. is the world's largest consumer of cocaine. Cocaine is commonly used in middle to upper class communities. It is also popular amongst college students, to aid in studying and as a party drug. Its users span over different ages, races, and professions. In the 1970s and 80's, the drug became particularly popular in the disco culture as cocaine usage was very common and popular in many discos such as Studio 54. Household Survey on Drug Abuse (NHSDA) reported in 1999 that cocaine was used by 3.7 million Americans, or 1.7% of the household population age 12 and older. Estimates of the current number of those who use cocaine regularly (at least once per month) vary, but 1.5 million is a widely accepted figure within the use had not significantly changed over the six years prior to 1999, the number of first-time users went up from 574,000 in 1991, to 934,000 in 1998 – an increase of 63%. While these numbers indicated that cocaine is still widely present in the United States, cocaine use was significantly less prevalent than it was during the early 1980s. Monitoring the Future (MTF) survey found the proportion of American students reporting use of powdered cocaine rose during the 1990s. In 1991, 2.3% of eighth-graders stated that they had used cocaine in their lifetime. This figure rose to 4.7% in 1999. For the older grades, increases began in 1992 and continued through the beginning of 1999. Between those years, lifetime use of cocaine went from 3.3% to 7.7% for tenth-graders and from 6.1% to 9.8% for high school seniors. Lifetime use of crack cocaine, according to MTF, also increased among eighth-, tenth-, and twelfth-graders, from an average of 2% in 1991 to 3.9% in 1999. and disapproval of cocaine and crack use both decreased during the 1990s at all three grade levels. The 1999 NHSDA found the highest rate of monthly cocaine use was for those aged 18–25 at 1.7%, an increase from 1.2% in 1997. Rates declined between 1996 and 1998 for ages 26–34, while rates slightly increased for the 12–17 and 35+ age groups. Studies also show people are experimenting with cocaine at younger ages. 
NHSDA found a steady decline in the mean age of first use from 23.6 years in 1992 to 20.6 years in 1998.
<urn:uuid:62154b13-2fff-4de0-ae6e-6e0f3e837c33>
CC-MAIN-2016-26
http://cocaine-drug.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924398
13,619
3.1875
3
|Multi language programming [email protected] (1995-11-03)|
|Re: Multi language programming [email protected] (1995-11-12)|
|Multi language programming [email protected] (Dave Lloyd) (1995-11-13)|
|Re: Multi language programming [email protected] (1995-11-17)|
|Re: Multi language programming [email protected] (1995-11-20)|
|Re: Multi language programming [email protected] (1995-11-21)|
|Re: Multi language programming [email protected] (Dave Lloyd) (1995-11-27)|
|From:||[email protected] (Frederic Guerin)|
|Organization:||Universite de Montreal|
|Date:||Fri, 3 Nov 1995 17:15:01 GMT|
Hello compiler wiz,
Suppose I have software written using 2 different languages, say A and C, where C is a fixed language. How should A be designed, when it comes to interaction with C, so that the software can be compiled on many platforms/machines without modifications (or with as few as possible)?
There is the C++ way of doing it:
// some declarations
This works fine if every compiler of that language uses a common naming and calling convention on a given platform/machine (or if the compiler for language A is included, which is the case for C with respect to C++).
Hence the question: Can it be assumed that for a given modular language like C, FORTRAN, or MODULA, the naming and calling convention is unique for a given platform/machine?
But there are languages which for sure do not follow this principle, e.g. C++, which uses name mangling that can in principle be specific to a given compiler.
Hence another question: How should A be designed to handle such a situation?
P.S. I guess this situation requires some code modification, but how can it be kept minimal?
Thanks to everyone; I'll post a summary of the mail received.
[Sad to report that I've known lots of cases where different compilers for the same language on the same machine use different calling sequences and name mangles. Even C compilers usually mangle a little, e.g. adding _ before the name. -John]
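Only the comment "// some declarations" survives of the snippet above, so for readers who don't know the "C++ way" the poster alludes to, here is a minimal, hypothetical sketch of linking C++ code against a C translation unit via extern "C". The header name, function name, and body are illustrative assumptions, not recovered from the original post:

    /* mixed_linkage.h -- shared between the C and C++ sides */
    #ifdef __cplusplus
    extern "C" {              /* ask the C++ compiler for C naming/calling conventions */
    #endif

    double c_compute(double x);   /* implemented in a C translation unit */

    #ifdef __cplusplus
    }                         /* end extern "C" */
    #endif

    /* impl.c -- the C side */
    #include "mixed_linkage.h"
    double c_compute(double x) { return x * x + 1.0; /* placeholder arithmetic */ }

    // caller.cpp -- the C++ side, linked against impl.o
    #include "mixed_linkage.h"
    #include <iostream>
    int main() {
        std::cout << c_compute(2.0) << '\n';   // call site unchanged; only the symbol name differs
        return 0;
    }

The extern "C" block suppresses C++ name mangling so the linker sees the same symbol the C compiler emitted. As the moderator's note points out, this still assumes the two compilers agree on the platform's C calling convention and symbol decoration (such as a leading underscore), which is usually, but not always, the case.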
<urn:uuid:2c5c75ab-2a4f-452b-b8ab-355094a42fb4>
CC-MAIN-2016-26
http://compilers.iecc.com/comparch/article/95-11-044
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.774213
565
2.578125
3
Letting Biodiversity Get under Our Skin Some aspects of dirty living can be healthy. A new study posits that the decline of plant and animal diversity in cities may be linked to the recent surge of allergies and other chronic inflammatory diseases. By Rob Dunn We live at the crossroads of three global megatrends, three barreling and intertwined juggernauts of modernity. The first is the massive migration of humanity to the world’s cities. I grew up in a small town, walking deer trails beneath the shade of maples and oaks. Now I live with my kids in a city where the path beneath our feet is ever more likely to be paved. My story is our story. By 2050, two-thirds of all humans on Earth will live in cities. The second is the loss of biodiversity. Species are disappearing, both from the places where we live and from the earth as a whole. If our hairy ancestors were to visit our cities and suburbs, they would wonder how the escalators work, but they would also question where the plants and animals have gone. What have we done with all the birds? Some, like the Carolina parakeet, are just gone. Others live on, but at a distance—geographically removed from our daily lives, far away from the majority of people. And then there’s the third trend—the one that, at first glance, seems not to belong with the others. The prevalence of allergies and chronic inflammatory diseases among urban populations in developed countries has skyrocketed in recent years. Incidences of asthma, Crohn’s disease, multiple sclerosis, and even depression (which can have an immune component) are on the rise. The parallels in geography and timing between urbanization, the loss of biodiversity, and the rise in immune-system problems raise an intriguing—and troubling—question. Could our distance from nature and our chronic immunological discontent be related? Some now say . . . yes. In May 2012, a team of Finnish ecologists, allergy specialists, molecular biologists, and immunologists led by Ilkka Hanski at the University of Helsinki announced the results of a study comparing the allergies of adolescents living in houses surrounded by biodiversity to those of adolescents surrounded by simplicity—the modern landscape of cement and grass. (1) They found that those individuals who lived in houses surrounded by a greater diversity of life were themselves covered with different kinds of microbes. They were also less likely to show the telltale immunological signs of allergies. In the years to come, we may regard these results as a new threshold to our understanding biodiversity. What Hanski and others have posited—that the loss of contact with a diversity of other species is making us sick—is almost unprecedented in the long history of our medical understanding of the body. It is the opposite of the germ theory of disease. If the germ theory is the idea that the presence of bad species can make you sick, the growing sense seems to be that the opposite can also be true. We can get sick because of the absence of good species—or even just the absence of the diversity of species. The possible link between biodiversity and human health has been tossed around for a while. Half a dozen theories—biophilia, nature deficit disorder, the deficiency theory of disease, the dilution effect, and more—describe the ways in which the loss of a connection to biological richness might cause us to ail. Elements of these theories are at the core of modern ecology. 
Less biodiverse systems—be they grasslands, forests, or the biomes of tiny life on our skin and in our guts—are less resilient and at greater risk of invasion (whether by pathogens or weeds) than more diverse systems. Allergies were not part of the story until the early 1980s, and even then they were considered separately, as though part of another tale with a different beginning and different ends. Epidemiologists began to notice differences between the immune systems of city kids and farm kids. Farm kids were less likely to have allergies. A million things are different between cities and farms—education, refrigeration of food, exercise, exposure to the sun, exposure to toxins—and any one of them might affect children’s immune systems. Many explanations were suggested. But David Strachan, an epidemiologist at St. George’s University of London, had a curious idea, which he called the hygiene hypothesis. The key was bacteria; the lock was our immune system. Perhaps urban kids were too distant from microbial nature for their immune systems to develop properly. Farm kids work in the dirt. They touch farm animals. They are exposed to more life, be it cows, chickens, or—as Strachan suspected—the microbes that cows and chickens harbor. It was a wild, speculative idea. It also increasingly appears to have been right. Progress in testing the hygiene hypothesis has been incremental rather than revolutionary. Farm kids, particularly those who interact with farm animals, do suffer from fewer allergies. And in general, it is beginning to seem as though exposure to bacteria and/or parasitic worms early in life may be necessary to forestall the development of allergies. In West Africa, children who had parasitic worms were at increased risk of allergies when those worms were removed. In Detroit, houses with dogs had more kinds of bacteria than those without. Pregnant women living in those same doggy houses were less likely than women in dogless houses to show evidence of allergy in their umbilical-cord blood. (The presence of an allergic response (atopy) in umbilical-cord blood has been shown to predispose children to allergies once they are born.) In laboratories, mice without skin bacteria failed to develop normal immune systems. Add skin bacteria back, and their defenses were restored. None of these effects is simple. They come with caveats and clauses, but we should not expect an ecological interaction to be easy to understand. No one going to a play with hundreds of characters expects it to be short. Yet, as complex as the connections might be, consensus has begun to emerge that some aspect of “dirty” living is good. Bacteria seem to be part of the useful dirtiness, but which bacteria? Or maybe the question is, how many? Or what mix? Studies tend to refer to “missing microbes” as if they were some great mass—a heaving, metabolizing pile of life that, Buddha-like, needs to be rubbed for health. But it’s not yet been established whether we are missing interactions with lots of microbes, lots of kinds of microbes, or something else. The trouble is, thousands of bacteria can be found on the average human body, perhaps tens of thousands in the average house—and far more in backyards, farms, and the wild. Microbiologists have barely scratched the surface in their attempts to calculate the sublime magnitude of their quarry. 
What can be said with certainty is that, as we have become more urban and as we have transformed the world, we have also become experts at replacing habitats filled with many species with habitats populated by just a few. We plant inert cement where forests once grew. We clean and scrub our houses with antibiotic wipes. We overuse antibiotics to clean out pathogens in our bodies. We overuse antimicrobials to clean everything else. One can now even buy underpants preloaded with chemicals that clean away the bacteria below the belt. The word “clean” seems wholesome, but what it usually means is kill. We kill some species and favor others. We once cleaned the predators and snakes from around our homes. Now that the snakes and predators are gone, we clean what is invisible. As we do, we kill the life most susceptible to our weapons. In their place grows a more depauperate and resistant wildness—nature despite us, not for us—a jungle of potentially dangerous weeds. We are reducing diversity in our daily lives, even on our bodies, in exactly the same way that we are reducing it in the world. We manage our own flesh as we manage the earth. This parallel caught Hanski’s attention, and he wondered whether he could take the hygiene hypothesis a step further. Could the loss of biodiversity—the number of kinds of species, not the presence of some particular form—lead our immune systems to break in such a way that they can no longer distinguish wholesome friends from ancient enemies? It was an idea already suggested in the work on dogs in houses, but Hanski thought he could carry it a step further, out-of-doors. The Wright brothers did not take off in a thunderstorm, and Hanski, for his part, chose to begin his work where he could control as many extraneous factors as possible. Hanski is highly regarded for the care he takes in designing studies; he chooses circumstances that reduce the wilderness to its simplest elements, whether that means studying flies on dead animals, beetles in dung, or the waxing and waning of populations of butterflies in patches of grass. It was the elegance of this approach that won him the Crafoord Prize, the most prestigious prize in ecology. Not content to rest on his laurels, Hanski set out to expand his work on why rare species decline—and to begin probing the consequences of those declines. In studying households, Hanski wanted to work with houses he could know in minute detail. He would work in his native Finland, where biodiversity was low to start with, low enough to be knowable, if not yet known. He chose to study a city and region in Finland where few people move very far, where the microbes they are born around might be similar to those among which they die. He then focused on adolescents to control the impact of age. If biodiversity was in fact affecting allergies, Hanski would maximize his chances of seeing the effect. Hanski randomly selected 118 adolescents in an equal number of homes within a 100 kilometer–by–150 kilometer area. Some of the homes were, by chance, in the city, and others stood alone out in woods or on farms. Hanski and his crew visited those houses, armed with needles and plant presses. They drew blood from each adolescent and screened the samples for evidence of allergies. To measure the diversity of bacteria on the adolescents’ skin, Hanski and his crew swabbed their forearms, then amplified and sequenced the DNA present. The approach was standard: they needed just a tiny patch of skin to represent the life of the whole. 
Measuring the biodiversity outside took the most work. Hanski chose to survey plants. Plants don’t move, which makes them easy to count; it might also (although this is pure speculation) make them more likely to accumulate microbes as they settle out of the drifting snow of bacteria-laden air. Hanski and his crew of ten field assistants counted and identified every plant in each and every backyard. They did ecology in the way Hanski has done it throughout his entire career. The idea was to test whether places with high biodiversity outdoors tended to have high microbial biodiversity indoors, which would in turn lower the inhabitants’ risk of allergic diseases. In retrospect, it seems unlikely that Hanski and his colleagues would find a strong relationship between plant biodiversity, microbes, and allergies. If you are studying patches of grassland and butterflies, there are relatively few species in play. One can reasonably expect to understand the main factors that influence where they occur. But thousands of species live on the human body—most of which have not yet been named, much less well understood. The microbe communities found on different body parts of a particular person—say the tongue and the toe—are predictably different. The microbes of a tongue never remotely look like those of a toe. But why your tongue has species so different from my tongue has been impossible to explain. An individual body encounters tens of thousands or more bacteria in its lifetime. Just which ones stick and establish themselves might be mostly a matter of chance. Yet, when Hanski and his colleagues looked at their data, they found a remarkably clear pattern. Higher native-plant diversity appeared to be associated with altered microbial composition on the participants’ skin, which led in turn to lower risk of allergies. One group of microbes, the gammaproteobacteria, seemed to be particularly strongly associated both with plant diversity and with allergies. Unbeknown to Hanski, more than 40 years earlier this same group of bacteria had been shown to wax and wane on human skin with variation among the seasons. Hanski and his colleagues found that the bacteria also vary in space. It didn’t matter whether they considered allergies to cats, dogs, horses, birch pollen, timothy grass, or mugwort. In each case, individuals with more kinds of gammaproteobacteria on their bodies were less likely to have allergies. No one had ever shown this before. No one seems ever to have looked. When I consulted my colleagues about the results, some were excited. Others were skeptical. Maybe the analysis wasn’t quite right. Maybe Hanski focused too much on the gammaproteobacteria and not enough on other kinds of bacteria. But all agreed that, as they went forward with their research, they would be looking for similar effects. Can the wildness outside sneak all the way inside? No one has offered a very compelling explanation of how the diversity of plants or life in general in backyards alters the composition of bacteria on human skin. It is too early to know the answer. But the bigger question is how the composition of bacteria on our skin (perhaps in concert with the diversity of plants and other organisms outside) influences our potential to develop allergies. Several options have emerged. The biodiversity of the gammaproteobacteria and other bacteria might directly benefit us. We tend to think of the immune system as our body’s attack dog. It is not. 
The primary role of the immune system is to distinguish deadly species from good species and, some argue, good species from simply innocuous ones. The attacks are secondary—the easy part. In this way, the immune system is our sixth sense. It is our inner taxonomist. And this inner taxonomist needs to see a lot of species to learn to distinguish good from bad from innocuous. If it does not, it makes mistakes. It sees our body’s own cells or pollen grains and judges them to be dangerous. In this model, the world around us needs to be diverse enough for our immune system to gain perspective. Or maybe, as Hanski and his colleagues have suggested (and as the studies of dogs have suggested independently), the odds of having some beneficial bacteria species in a house increase with certain kinds of microbial diversity. The diversity of the gammaproteobacteria or other bacteria in this telling would be a kind of insurance policy. Finally, a third possibility harks back to ancient wars. Bacteria and fungi compete. Fungi are everywhere in households and, in contrast to bacteria, seem more likely to cause allergies than to prevent them. Fungal diversity appears to be lower in houses where bacterial diversity is higher. Maybe more diverse household bacteria can fight off fungi, winning an invisible war on our behalf. Hanski himself does not yet have enough perspective—nor data—to distinguish among explanations. Nor does anyone else. We wait. Perhaps we need something like an ecological theory of disease. Such an ecological theory of disease would posit that we can get sick either because we are afflicted by the presence of bad species or by the absence of good species—or a good mix of species. Such a theory would be new to the medical world and to society in general. We are good at killing species around our houses and on our bodies, but far less practiced at cultivating them. Yet, as much as the idea that some of the species around us are beneficial is foreign to doctors, it is old hat to ecologists. To ecologists such as Hanski, the interdependence of species is self-evident; the normal status of life is to be enmeshed in other life. Our conscious minds and progressive societies seem slow to realize this, but our subconscious immune systems may have known it all along. As we wait for more understanding, we continue to simplify the world. We will become more urban and thus more likely to suffer from allergies and autoimmune diseases, at least if Hanski is right. And if he is right, there may also be a way forward, a way out of our sick and simple morass. Could we rewild the places around us, plant a richness of species in our backyards and so raise healthier children covered in more kinds of bacteria? As a country boy who is living now in the city, raising two children, I hope so. Whatever we do, we will be measured by our immune systems and our microbes, which in their function or dysfunction seem to record the richness of our lives. 1. Hanski, I. et al. 2012. Proceedings of the National Academy of Sciences doi/10.1073/pnas.1205624109. Rob Dunn is a science writer and biologist in the Department of Biology at North Carolina State University. His first book, Every Living Thing, told the stories of the sometimes obsessive, occasionally mad, and always determined biologists who sought to discover the limits of the living world. 
His new book, The Wild Life of Our Bodies, explores how changes in our interactions with other species—be they the bacteria on our skin, forehead mites, or tigers—have affected our health and well-being. Rob lives in Raleigh, North Carolina, with his wife, two children, and lots of microbes.
<urn:uuid:c0f2b973-47ed-4c33-b257-81acbca46851>
CC-MAIN-2016-26
http://conservationmagazine.org/2012/09/biodiversity-under-our-skin-2/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968124
3,671
2.703125
3
Controlled burn helps create habitat
Earlier this month, you may have been concerned about the smoke slowly filling the sky east of Broken Bow. Grass fires are not uncommon this time of year and the weather has been a bit drier. However, you can rest at ease, as what you were seeing was the result of a controlled burn just outside of Berwyn, a fire started on purpose by Robert Harrold and his crew at Prescription Pyro, a custom burning business in Broken Bow. Habitat management was the goal. Fire is an intrinsic part of nature and can be either destructive or beneficial. Change by fire is biologically necessary to maintain many healthy ecosystems. Wildlife managers have learned to use fire to cause changes in plant and animal communities to meet their objectives. Native Americans used fire to clear undergrowth in pine forests to improve deer hunting. Early Colonial settlers learned from the Indians and used the practice for their own benefit. This tradition was continued for years until booming population growth created a danger from fires to homes and decorative trees. Foresters called for a halt to burning, even natural fires typically started by lightning. This policy eventually had disastrous consequences, as without the natural cycle of fire, forests became choked with undergrowth and dead timber. The most tragic example was the Great Yellowstone Fire of 1988, which raged for several months and affected 800,000 acres (more than a third of the national park). Policies were soon changed to allow for a more natural cycle of fire under the watchful eye of the National Forestry Service. The intention of the burn on Sunday was to purge the area of overgrowth and get rid of the cool-season grasses. One of the best ways to develop a property for wildlife is through habitat management. By definition, habitat management is manipulating the habitat as necessary to provide all the essentials wildlife requires on a year-round basis. One of the most important tools for managing the habitat on a property is controlled burning (also called prescribed burning). Besides eliminating invasive, non-native species, prescribed burning releases nutrients into the soil, which stimulates the growth of high-quality native grasses, forbs and legumes. Harrold and his team cleared approximately 40 acres owned by Nebraska One Box Habitat Chairman Bob Allen. With the land cleared and the soil nutrients enriched by the fire, volunteers of the Nebraska One Box will replant the area with native prairie seed (drilling with equipment from Arrow Seed). Enhancing the warm-season grasses will hopefully create better winter habitat for wildlife. Certain species such as pheasant and quail require specific cover types for nesting that can only be brought about through fairly frequent prescribed burning, while others such as deer and turkey can be maintained in areas receiving less frequent prescribed burning. Nebraska One Box has spent a serious amount of time and resources in the area in an effort to improve wildlife habitat and increase pheasant populations.
<urn:uuid:93b96533-e252-47d0-8cea-ba108fb5d255>
CC-MAIN-2016-26
http://custercountychief.com/content/controlled-burn-helps-create-habitat
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960389
577
3.265625
3
It is always important to know exactly what is in the food we eat and to seek out ingredients that benefit us in the best ways. Many of the foods available in grocery stores and popular fast food items are full of nasty ingredients that do way more harm than good. From corn syrup to sugar, some of the things on this list may surprise you, but when it comes to choosing food that's good for you, these ingredients should be avoided at all costs. High Fructose Corn Syrup: This ingredient is pretty notorious, and it definitely lives up to its reputation. High Fructose Corn Syrup is full of nasty, unneeded calories, acts as a bad hormone booster, and has the ability to cause overeating and unnecessary weight gain. Yellow 6, Blue 1 & 2, Red 3, and Green 3 are all artificial food colorings that have been linked to things like brain and kidney cancer. Food dyes can be hard to avoid, especially since they're used in everything from pet food to sausages, but finding foods that only use natural dyes is a great start! As if you needed another reason to avoid sodas and artificially flavored juices: this ingredient, when used in drinks, is combined with ascorbic acid, and the two together have been known to cause cancer. BHA & BHT (Butylated hydroxyanisole & butylated hydroxytoluene): Typically used as preservatives to keep household foods like cereal, potato chips, and candy from going stale quickly, BHA & BHT are commonly known as cancer-causing carcinogens that have also been linked to things like insomnia, hair loss, and liver damage. While sugar can't (and shouldn't) always be avoided, recognizing added sugars in processed foods as harmful is definitely important. Choose natural sugars, such as those in fruit, over added sugars in things like drinks and snacks. Artificial sweeteners like Splenda and Sweet 'n Low market themselves as being better than real sugar, but they're actually much worse. They can be extremely hard on our metabolisms and add tons of useless, harmful calories to anything that they're combined with. Found in things like canned soup, fast food, and diet sodas, MSG is a "flavor enhancer" that has been known to damage cells to the point of cell death, cause type II diabetes, and contribute greatly to obesity. This post was originally published on Laila Ali Lifestyle.
<urn:uuid:28f62776-1eaa-4f30-969e-cf1fc7ec2a33>
CC-MAIN-2016-26
http://dailytoa.st/blogs/7-food-ingredients-to-beware-of
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961789
499
2.53125
3
(includes deep brain stimulation, the brain and the heart, imaging, dendritic cells, science in the schools, Lasker Award) Through our science and health grants, Dana funds pilot studies to test new hypotheses about how the brain and the immune system function in health and disease. Many of our grants fund small, promising, first-in-patient studies; researchers use the preliminary data they acquire to support their applications for federal funds for extended study. We favor new investigators with bold ideas as a way to draw talent into these fields, and we fund established investigators who wish to take a leap in a new direction or test a fledgling theory. We encourage experienced researchers to mentor newer colleagues. We reward people who take risks and have fresh ideas. Our science grant sections include brain and immuno-imaging, clinical neuroscience research, human immunology, and neuroimmunology. Application guidelines can be found online for each branch of research at dana.org/grants. We also occasionally sponsor workshops and forums for scientists addressing specific research questions in these areas, and we provide support for new researchers to receive mentored training or to attend research-related meetings. Our funding reflects our priorities: understanding the processes involved in brain and immune-related diseases; assessing the effects of experimental treatments; describing how the healthy brain and immune system work, and how malfunctions lead to brain and immune disorders; and adapting existing technologies and refining new techniques to improve diagnostic and treatment research and clinical practice. In 2007 we awarded nearly $26 million in direct grants and programs. This past year, investigators whose work we support have advanced treatments for several brain disorders, including the use of electrodes implanted in the brain (deep brain stimulation, or DBS) for intractable depression, Parkinson’s disease, and to help a few patients in a minimally conscious state. Other grants have explored possible connections between heart disease and depression and how the immune system's dendritic cells work to identify invaders and to marshal immune defenses against them. Others have refined techniques for imaging the brain and immune cells in action. Here are some highlights: Deep Brain Stimulation As researchers have learned more about the neurobiology of disorders, they have turned their attention from exploring the functions of specific areas in the brain to understanding how networks of brain cells connect specific brain regions to support various cognitive functions. Researchers also are exploring how malfunctions in specific neural networks may be involved in complex brain disorders. For instance, new efforts to identify and understand the structure and functioning of neural networks are helping neurosurgeons to pinpoint optimal locations for placing deep brain stimulators in patients with epilepsy and Parkinson’s disease who do not respond to medical therapies. Unlike surgical treatment for these disorders, deep brain stimulation affects electrical patterns in the brain and provides a flexible way to adjust treatment—the devices can be turned up or down, off and back on—to provide maximal therapeutic benefit. 
On the basis of successes in treating diseases of motor circuits, such as severe essential tremor, Parkinson's disease, and dystonia, some Dana-funded researchers now are investigating whether DBS can therapeutically alter circuits in the brain’s limbic system or associative networks—the ones that are involved in our moods, our integration of ideas and experiences, and even our consciousness. Parkinson's disease: In people with Parkinson's disease that is no longer adequately managed with drug therapies, deep brain stimulation can help to control tremors and other disease symptoms. But surgeons are not always sure exactly where in the brain's subthalamic nucleus to place the electrodes for the best results for each person, which means they must do the surgery in two or more stages, having the patient awake during one session to respond to surgeon's questions. Support from a 2004 Dana grant is enabling Peter Brown, M.A., M.D., F.R.C.P., and Marwan Hariz, M.D., of University College in London to test two potential methods of better identifying the target area in each patient for placing the electrodes. On the basis of studies in animals with symptoms of motor diseases, they theorize that measuring the response of certain motor cells can predict good DBS implantation sites, a process that can be accomplished while the patient is under anesthesia. Mood: In early 2007, Helen Mayberg, M.D., and colleagues at Emory University expanded their research on deep brain stimulation in people with severe depression who do not respond to available drug therapies. With an initial Dana grant in 1995, Mayberg and her colleagues at the University of Texas compared brain scans and other data from people with severe depression who responded to medications and those who did not. After years of study at Texas and thereafter at the University of Toronto, Mayberg and colleagues identified a handful of brain regions that seemed to be consistently involved in severe depression. In 2003, in Toronto, they started testing the effects of using DBS in one of these regions, called cingulate area 25. In four of the six people with intractable depression who underwent the procedure, DBS had an immediate effect, unlike drugs and therapy in people with treatable depression, which can take weeks to ease symptoms. The DBS therapy’s effect was sustained for six months with continued stimulation. Patients are conscious during the surgery, describing how they feel so the surgeons can be sure to get the placement and electrical dosage correct, and some reported an immediate lifting of their spirits, as if a giant weight had fallen away. A 2006 Dana grant, along with funding from other sources, is enabling Mayberg to study an additional 20 patients, from screening through five years of follow-up. The researchers aim to understand which specific neural circuits give rise to severe depression, how deep brain stimulation affects these circuits, and whether the treatment is effective as a long-term remedy. Consciousness: The work of Nicholas Schiff, M.D., and Joseph Fins, M.D., made headlines in 2007 when they announced the first results of their study using deep brain stimulation in the thalamus of carefully selected adults who have been in a minimally conscious state. One of these patients, a 38-year-old, received DBS treatment this past year after a mugging in 1999, in which he received kicks to the head, left him unable to walk, talk, feed himself, or respond to people. 
He now can communicate reliably with gestures as well as with short spoken phrases, can track people's movements with his eyes, and can take all of his meals by mouth. Schiff, Fins, and their colleagues continue to monitor his progress. They plan to test DBS treatment in up to a dozen other minimally conscious adults, selected on the basis of imaging studies suggesting that their injured brains may be creating alternative communication networks. The researchers are using brain imaging studies to determine how specific brain areas in adults who have recently emerged from minimal consciousness differ from those who are at the upper limits of minimal consciousness but have not "re-awakened." They also are guiding doctors and patients' families in dealing with the ethical issues involved in this research—from deciding whether the patient should participate in the research to determining when to stop treatment—where such decisions may vary depending on the patient's level of consciousness. Schiff and Fins pioneered the development of ethical guidelines for undertaking experimental studies in these patients with the help of a 2003 Dana grant. With the current grant, they aim to build a scientific basis for distinguishing among levels of minimal consciousness by using imaging and behavioral tests. When doctors can know more about behavioral evidence of minimal consciousness and correlate these behaviors with evidence, from imaging, that new connections between brain cells are forming effective networks, they should be better able to predict which minimally conscious patients are candidates for DBS and where best to surgically implant the DBS electrodes. This research differentiates adults who are at various levels of minimal consciousness from people who are in a “persistent vegetative state,” a term coined years ago by Fred Plum, M.D. Under Dana funding in the 1990s, Plum characterized people in a persistent vegetative state as those with severe brain damage whose “autonomic” nervous system functions to maintain organ functions and reflex actions and who have normal sleep and wake cycles, but who have no detectable awareness. People in minimally conscious states, in contrast, provide clear but inconsistent behavioral evidence of consciousness. Arthritis: Using a Dana grant awarded in 2007, Kevin Tracey, M.D., and colleagues in the North Shore–Long Island Jewish Health System will conduct the first study in people of whether deep brain stimulation of the vagus nerve—which connects the brain to the heart and other organs—can control inflammation and ease painful joint symptoms in people who have intractable autoimmune rheumatoid arthritis. The approach is based on findings that the vagus nerve, which helps to control painful inflammation, is underactive in people who have rheumatoid arthritis. DBS of the vagus nerve might therefore reduce inflammatory joint pain. In this case, no surgery is involved—instead of electrodes implanted deep in the brain, the stimulation is externally applied by a device attached to the outer ear, which is directly connected to a branch of the vagus nerve. In healthy volunteers, the stimulation does affect the vagus nerve; now Tracey will investigate whether the same is true in people with rheumatoid arthritis, and, if so, whether stimulation reduces painful joint inflammation. Conferences: In 2007, the Foundation was a sponsor of the Disorders of Consciousness conference of the Association for Research in Nervous and Mental Disease. 
Topics included how to use advances in imaging and neuroscience to improve the evaluation, treatment strategies, and care of people with injury-related disorders of consciousness. We also were a sponsor of a conference to develop scientific and ethical guidelines for clinical trials of deep brain stimulation to treat mood and behavioral conditions. Co-sponsors included the National Institute of Neurological Disorders and Stroke and the National Institute of Mental Health. The Brain and the Heart For the past two decades Dana has funded research on how signals from the brain may influence disease elsewhere in the body, and vice versa. These studies tend to be larger, long-term efforts, as researchers look at a range of patients and follow their progress in health or into sickness. Heart surgery and the brain: Dana grants have helped fund a longitudinal study led by Guy McKhann, M.D., at Johns Hopkins University, investigating possible effects on the brain of heart surgery, based on bedside reports that surgeries such as coronary artery bypass grafting caused cognitive decline. Theories attributed this effect to anesthesia, or posited that surgery using a pump to infuse the blood with oxygen, bypassing the heart, might induce tiny blood clots, which then traveled to the brain and produced "mini-strokes." But after the first decade of tracking such bypass heart surgery patients and comparing them to several control groups, Hopkins researchers have concluded that neither of these theories is correct: In their judgment, it is underlying vascular disease, not the surgery, that is primarily responsible for the long-term decline in cognitive functioning in people with coronary artery disease. Also running counter to current thinking were their data showing that heart surgery does not increase the likelihood of depression in these patients; low mood going into the procedure is the best predictor of low mood coming out of it. Through brain imaging, they also have found that nearly one-quarter of the heart surgery patients had evidence of having had a "silent" stroke, one that went unnoticed or undiagnosed, at some time before their surgery. Over the next three years, the researchers will use statistical analyses to compare the effects on the brain of the two methods of bypass surgery (on the bypass pump or off the pump), as well as continue to track the progress of vascular disease and brain performance in these more than 400 patients. What they are learning may change how doctors decide to treat each patient: Doctors and patients can choose surgery without worrying that the procedure will add to the risk of long-term cognitive decline. Tracing signals, making predictions: With Dana support, Brian Litt, M.D., and colleagues at the University of Pennsylvania have been developing and refining algorithms and computer-learning models to track the patterns of brain signals in people with epilepsy and to predict when their next seizure might occur, with the goal of trying to avert the seizure through medication or deep brain stimulation. Working with Klaus Lehnertz, Ph.D., at the University of Bonn, Litt has assembled an international collaborative group for knowledge and data exchange, aiming to perfect these algorithms. This group is currently putting together an international data archive for sharing the results of intracranial electrophysiology research. Litt's research has contributed to two implantable brain devices to treat epilepsy, both now in clinical trials. 
Litt and colleagues at the University of Pennsylvania and Georgia Institute of Technology are now applying the same monitoring and modeling techniques to heart signals in an effort to predict when a person is in danger of experiencing an episode of atrial fibrillation, an irregular heartbeat caused by abnormal signals in the two upper chambers (atria) of the heart. Atrial fibrillation is a common problem after surgery. If it were predictable, the episodes could be medically prevented or treated, improving chances for a good surgical recovery and reducing chances of stroke. In their pilot study of 49 people who had just had heart bypass surgery, the researchers found that by monitoring the heart electrocardiogram readings for four days following surgery, they could accurately predict an episode of atrial fibrillation more than 80 percent of the time. Litt will be applying for a federal grant to validate this prediction method in a larger, prospective study. Many Dana-funded investigators use the latest brain imaging technologies—and develop new ones—to find better ways to diagnose, treat, and prevent disease. From imaging regions of connected brain activation on down to the actions of molecules inside single cells, researchers extend our knowledge of all levels of brain activity and throughout the life span. Mapping: One of the most important uses of functional magnetic resonance imaging (fMRI) is to map for surgeons, prior to surgery, where speech and other key functions originate in the patient’s brain, in order to spare those areas to the extent possible during surgery for conditions such as epilepsy and brain tumors. But the imaging technique does not measure brain activity directly; instead it measures the amount of blood flow in the small blood vessels in specific brain areas. Greater blood flow in a specific area suggests that brain cells are active because they are using oxygen carried by the blood. Injuries such as brain tumors and strokes damage these small blood vessels, which might complicate the interpretation of the imaging results. John Ulmer, M.D., and colleagues at the Medical College of Wisconsin studied blood oxygen level data from functional magnetic resonance images of patients before, during, and after surgery. To determine the accuracy of the imaging they compared oxygen level data with other measures, signs, and symptoms of the patients. They found evidence of a disconnect wherein oxygen level data indicated that the areas of the brain were active and the results of other measures assessing brain activity suggested otherwise: some brain areas that were active appeared to be inactive according to the blood oxygen level. Ulmer and colleagues continue to fine-tune the map of the mismatches. Their Dana-sponsored research has led to the award of two federal grants to develop more-accurate imaging maps that take into account the shifts in blood flow, starting with the visual system and the sensorimotor system, which regulates all motor activity based on sensory input. Memory: How are memories formed at the molecular level in the brain? Ryohei Yasuda, Ph.D., and colleagues at Duke University have developed a new imaging system called two-photon fluorescence lifetime imaging microscopy to better see how. The system combines two-photon imaging, which visualizes molecules in cells in living brain tissue and can track the same molecules over time, with fluorescence resonance energy transfer (FRET) imaging, which shows how neighboring molecules affect one another. 
Their first focus is the protein Ras, which sends on-off signals to other neurons. Ras signaling is required for many forms of learning and memory; mutations in the signaling pathway are associated with cognitive and learning disabilities such as autism and mental retardation. Using their imaging system, Yasuda and colleagues have found that the Ras protein is required for maintaining long-term synaptic plasticity (strength of memory), but not in induction (memory formation). This knowledge eventually may lead to new therapies for these disorders. Autism: During fetal development, neurons produced at a similar time from progenitor cells migrate and layer themselves neatly together in columns in the cerebral cortex. In children with autism, imaging studies indicate that these columns are narrower and denser than in normally developing children. Song-Hai Shi, Ph.D., and colleagues at Memorial Sloan-Kettering Cancer Center will use a form of cellular imaging to see how progenitor cells form into neurons and migrate to form columns in a mouse model of a genetically produced form of autism called fragile X syndrome. This Dana New Investigator grant was awarded in 2007 with matching funds provided by the nonprofit organization Autism Speaks. Brain immune cells: The blood-brain barrier prevents most molecules that are carried by the blood from passing into the brain. This protects the brain from many common infections that occur in the rest of the body. When the brain is injured, the one type of immune cell that resides in the brain is activated. These "microglial" cells move to sites of brain injury and produce an inflammatory response. Recent evidence indicates that microglial cells identify the earliest stage of brain tumors, even though the tumor cells quickly hide and escape further detection. Researchers are working to describe the actions of microglial cells to learn how their responses might be weakened, in the case of inflammation, and strengthened, in the case of identifying and marshaling an immune attack against brain infections and cancers. Using two-photon microscopy, which can image living tissue up to a depth of one millimeter, Michael Dustin, Ph.D., Wen-biao Gan, Ph.D., and colleagues at New York University are observing at the cellular level in real time what happens when the brain is injured. In research published in 2007, they described how quickly microglial cells move to the site of an injury at a synapse (where two brain cells communicate) and what chemical signals them to move. Using the results from this Dana-funded research, they have obtained federal funding for further study. In the body's immune system, dendritic cells are the sentinels. They reside in small numbers in tissues that are in contact with the outside world, such as the skin and lining of the nose, stomach, and intestines. Tree-like in shape, their "branches" capture foreign materials in the body and present their captured prey to immune T cells so that the T cells can recognize the foreign materials and attack them. Sometimes this process goes awry, producing autoimmune diseases. In these diseases, immune cells mistake the body’s own cells as foreign and attack them. Researchers are studying how dendritic cells do their jobs for clues on how to help them work better. Helping the cells to strengthen their immune response could help fight infections such as AIDS and cancers. 
Similarly, helping dendritic cells to dampen immune responses to the body’s own cells in autoimmune diseases could ease symptoms and maybe even prevent autoimmune responses. Recognizing cancers: One of our first grants to a consortium was to a group of researchers led by Madhav Dhodapkar, M.D., at Rockefeller University, and Olivera Finn, Ph.D., at the University of Pittsburgh, who were looking into the immune system's response to cancer. Cancers start as a series of premalignant changes in the body. These early changes are usually not detected by people and their doctors because they do not show symptoms. Over time, the precancerous tissue may or may not develop into a malignant tumor. Contrary to scientific speculation that the premalignant disease is not detected by the immune system, the Pittsburgh and Rockefeller investigators have found that certain precancerous changes are indeed detected by patients' immune dendritic cells. In the past five years, they have discovered that the immune system can recognize some components expressed by “cancer stem cells," the cells that are thought to drive cancer development. Now they hypothesize that the strength of the immune reaction at this premalignant stage of disease is what determines whether people will eventually develop malignant cancer. The Rockefeller investigators are working to determine how the dendritic cells detect these premalignant antigens and how they signal specific adaptive immune cells (which learn to recognize specific antigens and attack them whenever they appear) to produce antibodies to attack the antigens. The Pittsburgh researchers are seeking to identify and measure the extent of the antibodies’ responses to the antigens. Together, they aim to show how and why a person’s immune response may be too weak to stop premalignancies from turning into cancers. That knowledge could help spur development of therapeutic vaccines designed to strengthen weak immune responses and prevent or treat cancers. Researchers in the Rockefeller lab also are investigating how the immune system responds to various cancer treatments. Dhodapkar and investigator Radek Spisek, Ph.D., found that one form of chemotherapy—the drug bortezomib—kills tumor cells in such a way that it may allow the immune system to recognize these cells in the future. If so, the drug could enhance the immune system’s ability to fight off the tumors. Newfound site of immune activation: While studying what causes inflammation in blood vessels (which can lead to vasculitis and atherosclerosis, the obstruction of blood flow from plaque deposits on the inner surface of arteries), Cornelia Weyand, M.D., Ph.D., and colleagues at Emory University were surprised to discover that human arteries do not just transport blood but also play a critical role in regulating immune responses. They found that the walls of arteries harbor dendritic cells that can sense microbes and instruct T cells to respond to those that are harmful and to ignore those that are not. Comparing vessels from different regions of the body, they found that these dendritic cells respond to different antigens. Weyand and colleagues also have found one type of cell receptor (the TLR4 ligand LPS) that can activate certain dendritic cells, which then set off vessel-wall inflammation. By stimulating this receptor, the researchers can mimic the conditions of vasculitis in the lab and study the kinetics of immune responses during the early and later phases of the disease. 
Following their lead, other researchers now are investigating the immune system's role in artery diseases. Watching the beginnings: As they pioneer the means to observe events in immune (lymphoid) organs in living mice, Ulrich von Andrian, M.D., Ph.D., and colleagues at the CBR Institute for Biomedical Research at Harvard Medical School have learned much about the earliest events in critical immune reactions. Using a molecular microscopy technique they call multiphoton intravital microscopy, they study how dendritic cells teach adaptive immune system T cells to recognize their targets. In a three-step process, dendritic cells teach T cells to recognize the invaders so that they can attack them. The researchers have found that the reactions of certain antigens differ in living tissue compared with tissue observed on lab slides. They also have discovered that memory T cells—which remember prior exposure to infections so that the immune system responds better the second time around—are enriched in a locale that had not been previously considered: the bone marrow, where they are visited by migrating dendritic cells that give them information about infections elsewhere in the body. This Dana-funded early work formed the basis of von Andrian's successful applications for a series of federal grants. Science in the Schools The Foundation’s science education grants support collaborations with other organizations to enhance and augment the neuroscience curricula being taught in K–12 schools. Programs include producing print and online materials that contain current, credible information on the brain; publishing teacher’s guides, holding workshops for teachers; and sponsoring talks by neuroscientists. Setting standards in math and science—and meeting them: The Charles A. Dana Center for Mathematics and Science Education at the University of Texas in Austin, continues to expand its work to strengthen U.S. mathematics and science education through research, teaching, and collaboration. At this Dana Center (our first, in operation since 1993), the focus is on K–12 learning, from supporting and training teachers to building curricula and advising regions and states on how to set learning goals to ensure that children gain the knowledge they need to thrive in the world today. In the late 1990s, the Dana Center in Austin helped coordinate new math and science standards for the State of Texas. Dana Center staff travel to low-performing schools across the state, listening to teachers and students and helping them to reorganize and focus on learning the ever-more-demanding science and math curriculum. The results are soon evident. The Jasper Independent School District, for example, in just the third year of a five-year training program in which Dana Center trainers work with math and science teachers both in class and separately, has already seen progress in the lower grades in science. Last year the center came to the aid of Sam Houston High School, the longest-running “academically unacceptable” school in the state, according to the Houston Chronicle. Partnering with the national education organization Achieve, Inc., the Center continues to coordinate the Urban Mathematics Leadership Network, which brings together district and state mathematics educators from across the country to share solutions to common pedagogical problems. In 2007, a network meeting in Austin focused on how best to teach students struggling to learn algebra. 
In late 2007, the State of Washington chose the Dana Center in Austin to lead the revision of that state’s K–12 mathematics learning standards. Though the Center’s work has widened from Texas to the nation, through national and international seminars and its work with Washington state, its goal remains to help school systems produce graduates with both strong math knowledge and skills and an understanding of how to think mathematically and apply those skills in learning, work, and life. 2007 Lasker Prize honors "discoverer" of dendritic cells The work of immunologists such as Madhav Dhodapkar, Olivera Finn, Cornelia Weyand, and Ulrich von Andrian builds upon the discoveries of others, especially Ralph Steinman, who with Zanvil Cohn coined the term "dendritic cell" in 1973 and since then has teased out first the very rare cells and then their characteristics and behavior. "Immune cells are like musicians in a symphony, each very talented and specialized," he says. "But they need a conductor and composer, and that's what dendritic cells are." For his discoveries and continuing work, Steinman, of Rockefeller University, was awarded the prestigious Albert Lasker Award for Basic Medical Research in 2007. Of all the systems in the body, the immune system is the one "you can really teach, really make better," Steinman said during a forum at the Dana Center in 2006. But, he added, we don't yet know all the rules. He suggests that researchers aim high by trying to understand the immune system in people rather than in experimental animals or lab test tubes. "What I'd like to see is that we set our standards on the medical conditions that involve the immune system" such as allergies, AIDS, and cancers, he said in 2007. "I think that's where the biggest scientific challenges are, and if we don't direct ourselves to these conditions, we won't have the standards high enough for what we need to know." Steinman works toward this goal in his role as senior consultant for the Dana Foundation's immunology grants program, which targets patient-oriented research.
<urn:uuid:3437a3d7-c0a5-4124-87d3-90bb5c370892>
CC-MAIN-2016-26
http://dana.org/Publications/ReportDetails.aspx?id=44270
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946675
5,808
2.6875
3
(Editor's Note: This article was originally published on May 18, 2009. Your comments are welcome, but please be aware that authors of previously published articles may not be able to respond to your questions.) Crown vetch (Coronilla varia) is sometimes called axseed, axwort, hive-vine, or trailing crownvetch. To add to the confusion, it is sometimes referred to as Securigera varia. University of Missouri Extension considers the plant not a true vetch. Regardless of what name is preferred, crown vetch has landed on the list of unwelcome aggressive species in many northeastern states and parts of southern Canada. Originally imported to use for groundcover, it was quickly discovered that this member of the pea family (formerly Leguminosae, now Papilionaceae) was excellent for erosion control on slopes, soil rehabilitation, and roadside planting. Given free rein in nature, the plant adapted to all soil and environmental conditions. However, with care, crown vetch can be utilized in the landscape to provide attractive, no-maintenance ground cover for areas that are impossible to mow or maintain. In any instance, be sure this species is not planted in an area where it can spread to cultivated parts of the property. Also be aware that the plant is attractive to deer, as this is a natural forage for them. Pretty AND Hardy Crown vetch is a low-growing vine with a creeping stem that grows to less than 2 feet. Pink to rose to lilac pea-like flowers bloom in umbrel form from June through September, depending on region. The plant can be confused with partridge pea (Cassia fasciculata), other native vetches, and non-native pea family relatives. Some of the reasons this plant has naturalized so widely is its high tolerance for many conditions: - Prefers full sun, but will grow in sparse shade - Grows in rocky dry sites as well as moist areas with good drainage, clay and shallow soils - Tolerant of low pH and low fertility - Accepts a wide range of climatic conditions in Zones 3 to 10 - Has few insect predators - Spreads by rhizomes and by seeds - Seeds remain viable in the soil for many years The above list also shows why crown vetch can be a good addition to areas like slopes around buildings or driveways, lawns too steep for mowing, and similar situations, but the plant is a slow starter in initial application, taking from 2 to 5 years to fully establish. It can also be seriously damaged if mowed frequently. The subject of toxicity has a wide range of opinions and studies. Several resources stated that crown vetch posed a threat to horses, but that cattle could ingest it without consequences. Nowhere in the literature was the mention of toxicity to humans. Another source stated that crown vetch grew in a large horse pasture, but that the animals avoided it. Using Crown Vetch in the Landscape If crown vetch is the choice for inclusion in the landscape, it is available by either seed or peat-potted plants; both sources are relatively high in price ($50 for 30 plants/$17 for 1/4 pound seed). Three varieties are available: 'Emerald', 'Penngift', and 'Chemung'. 'Emerald' and 'Chemung' are the most vigorous of the three, and they usually have taller growth. 'Emerald' is suitable for conditions in the Midwest, and 'Chemung' prefers the low fertility areas of the Northeast. Planting crown vetch is more challenging than other species. The seed will not grow unless inoculated with a specific strain of bacteria. 
The microorganism attaches to the roots of the plant and captures nitrogen, degrades it, then supplies it to the plant. Seeds ordered from commercial companies will include a package of the inoculant and instructions. Proper soil preparation and amendment is crucial to this legume; it will not establish on unlimed soils. Lime and fertilizer are worked into an uneven and rough seedbed; leave rocks, large clods, and even stumps to help stabilize the soil. Good contact with the soil is critical to seed germination. Mulch is recommended as young plants establish. Straw, cheesecloth, used tobacco canvas, woodchips or woodbark would all be adequate. Crown vetch can be incorporated into an area where vegetation is thin and sickly. Scratch the surface with a rake to provide soil contact for the seeds. For detailed instructions and ratios, read this excellent article from the University of Kentucky Department of Agronomy. Peak times to establish crown vetch are mid-February through the end of March, or mid-August through mid-September. Any time in between will stress the seedling through heat, drought, and weed competition. Too late in the season, the plants will not survive the winter. If you have this wildly spreading plant in places you don't want it, it can be easily controlled by pulling the mature plants. If the location is too large for hand-weeding, mowing the plants at the flower stage for 2 to 3 consecutive years might control further spread. Just be sure to mow before the seeds mature. Still discussing small areas of invasives, use of Glyphosate, triclopyr, or metsulfuron will effect control, as long as they are applied during active growth. Remember that triclopyr (Weed-B-Gone, Brush-B-Gone) is selective for broadleaf plants and will not harm adjacent grasses. Glyphosate (Roundup) will kill anything it touches. As always, read the label thoroughly and use good chemical-use practices. For control in larger areas such as pastures, fields, and naturalized prairies, consult your local extension agent for best practices and products. Prices found on Gurneys.com February 2009 University of Missouri Extension. http://extension.missouri.edu University of Kentucky Department of Agronomy. "Slope Stabilization with Crown Vetch", A.J. Powell, Jr. Southeast Exotic Pest Plant Council. http://www.invasive.org The Ohio State University. "Ohio Perennial & Biennial Weed Guide". http://www.oardc.ohio-state.edu/weedguide First 2 photos: Wikimedia Commons, GNU Licenses Third photo courtesy Ohio State Weed Lab Archives, Ohio State University
<urn:uuid:1494b44d-1bcb-4eee-a817-faefb7e18666>
CC-MAIN-2016-26
http://davesgarden.com/guides/articles/view/2255/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.941587
1,355
2.515625
3
Advancements in Mobile Health Technology Mobile devices are playing a key role in transforming the efficiency and delivery of, and access to, the healthcare system. The number of mobile device users who have downloaded mobile healthcare applications nearly doubled, from 127 million in 2011 to 247 million in 2012.1 According to the Deloitte Center for Health Solutions, technological innovations are changing the face of healthcare from “brick-and-mortar” hospital transactions and face-to-face doctor-patient visits to mobile, virtual experiences around the globe.2 Remote monitoring devices have been utilized in several ways, from allowing physicians to remotely access patient medical records and diagnostic test results, to tools used to address substance abuse, smoking cessation, and prevention of chronic diseases in underserved areas.3 For example, glucose monitors and pedometers allow patients to be monitored in real time by physicians and put patients in charge of their own test taking, effectively reducing the need for potentially costly and time-consuming office visits. In addition to potential cost savings, a 2009 meta-analysis published in the Journal of the American Medical Informatics Association found that mobile devices improved physician response time, accuracy, data management, and record-keeping practices.4 Other potential benefits of mobile health technology include tools to: overcome language barriers; increase patient appointment attendance rates by providing virtual appointment reminders; and shorten patient emergency room wait time.5 Further studies have shown that mobile healthcare technology has the ability to improve access to medical care in rural areas by allowing physicians to remotely monitor patients.6 Despite the potential benefits that mobile technology advancements appear to exhibit in the medical and public health arenas, the rapid growth and adoption of mobile health technology is not without certain challenges. Many consumers are concerned about the confidentiality, privacy, and security of medical information that is transmitted via handheld devices. Further, regulatory changes regarding physician reimbursement and health insurance coverage for the utilization of mobile technology have not kept pace with the adoption of mobile health technology. Additionally, while the Food and Drug Administration (FDA) is responsible for medical device safety and monitoring, it is unclear what the FDA’s role will be in the regulation of these new medical screening and diagnostic tools.7 Mobile technology is a growing industry, set to become a $23 billion worldwide industry by 2017.8 As the opportunities for mobile health applications increase, so does the need for oversight and research into mobile health safety and security, as well as modifications to regulatory and reimbursement policy for healthcare providers utilizing mobile health technology. For example, a recent meta-analysis of studies regarding the effects of mobile technology on patient care determined that further research is needed to definitively show improvement in sustainable clinical outcomes with use of mobile health technology.9 Accordingly, the challenges and barriers associated with this innovation will need to be addressed in order to better align regulatory, reimbursement, and policy updates with the utilization of mobile health technology in this emerging area of healthcare reform. 
Notes:
1. "mHealth in an mWorld: How Mobile Technology is Transforming Health Care," by Harry Greenspun and Sheryl Coughlin, Deloitte Center for Health Solutions, 2012, p. 12-13.
2. Ibid, p. 5, 14.
3. "How Mobile Devices are Transforming Healthcare," by Darrell West, Issues in Technology Innovation, No. 18, May 2012, p. 34, 7-8.
4. "The Impact of Mobile Handheld Technology on Hospital Physicians' Work Practices and Patient Care," by Mirela Prgomet, Andrew Georgiou and Johanna Westbrook, Journal of the American Medical Informatics Association, Volume 16, No. 6, November/December 2009, p. 799.
5. Darrell West, May 2012, p. 5-6.
6. Ibid.
7. West, May 2012, p. 8-9.
8. "The Effectiveness of Mobile-Health Technologies to Improve Health Care Service Delivery Processes: A Systematic Review and Meta-Analysis," by Caroline Free et al., PLoS Medicine, Vol. 10, Issue 1, January 2013, p. 23; West, May 2012, p. 5, 6.
9. Ibid, p. 8.
Robert James Cimasi, MHA, ASA, FRICS, MCBA, AVA, CM&AA, serves as Chief Executive Officer of HEALTH CAPITAL CONSULTANTS (HCC), a nationally recognized healthcare financial and economic consulting firm headquartered in St. Louis, MO, serving clients in 49 states since 1993. Mr. Cimasi has over thirty years of experience in serving clients, with a professional focus on the financial and economic aspects of healthcare service sector entities including: valuation consulting and capital formation services; healthcare industry transactions including joint ventures, mergers, acquisitions, and divestitures; litigation support & expert testimony; and, certificate-of-need and other regulatory and policy planning consulting. HEALTH CAPITAL CONSULTANTS (HCC) is an established, nationally recognized healthcare financial and economic consulting firm headquartered in St. Louis, Missouri, with regional personnel nationwide. Founded in 1993, HCC has served clients in over 45 states, in providing services including: valuation in all healthcare sectors; financial analysis, including the development of forecasts, budgets and income distribution plans; healthcare provider related intermediary services, including integration, affiliation, acquisition and divestiture; Certificate of Need (CON) and regulatory consulting; litigation support and expert witness services; and, industry research services for healthcare providers and their advisors. HCC’s accredited professionals are supported by an experienced research and library support staff to maintain a thorough and extensive knowledge of the healthcare reimbursement, regulatory, technological and competitive environment. Mr. Cimasi holds a Masters in Health Administration from the University of Maryland, as well as several professional designations: Accredited Senior Appraiser (ASA – American Society of Appraisers); Fellow of the Royal Institution of Chartered Surveyors (FRICS – Royal Institution of Chartered Surveyors); Master Certified Business Appraiser (MCBA – Institute of Business Appraisers); Accredited Valuation Analyst (AVA – National Association of Certified Valuators and Analysts); and, Certified Merger & Acquisition Advisor (CM&AA – Alliance of Merger & Acquisition Advisors). He has served as an expert witness on cases in numerous courts, and has provided testimony before federal and state legislative committees. 
He is a nationally known speaker on healthcare industry topics, the author of several books, the latest of which include: “Accountable Care Organizations: Value Metrics and Capital Formation” [2013 - Taylor & Francis, a division of CRC Press], “The Adviser’s Guide to Healthcare” – Vols. I, II & III [2010 – AICPA], and “The U.S. Healthcare Certificate of Need Sourcebook” [2005 - Beard Books]. His most recent book, entitled "Healthcare Valuation: The Financial Appraisal of Enterprises, Assets, and Services" will be published by John Wiley & Sons in the Fall of 2013. Mr. Cimasi is the author of numerous additional chapters in anthologies; books, and legal treatises; published articles in peer reviewed and industry trade journals; research papers and case studies; and, is often quoted by healthcare industry press. In 2006, Mr. Cimasi was honored with the prestigious “Shannon Pratt Award in Business Valuation” conferred by the Institute of Business Appraisers. Mr. Cimasi serves on the Editorial Board of the Business Appraisals Practice of the Institute of Business Appraisers, of which he is a member of the College of Fellows. In 2011, he was named a Fellow of the Royal Institution of Chartered Surveyors (RICS). Todd A. Zigrang, MBA, MHA, ASA, FACHE, is the President of HEALTH CAPITAL CONSULTANTS (HCC), where he focuses on the areas valuation and financial analysis for hospitals and other healthcare enterprises. Mr. Zigrang has significant physician integration and financial analysis experience, and has participated in the development of a physician-owned multi-specialty MSO and networks involving a wide range of specialties; physician-owned hospitals, as well as several limited liability companies for the purpose of acquiring acute care and specialty hospitals, ASCs and other ancillary facilities; participated in the evaluation and negotiation of managed care contracts, performed and assisted in the valuation of various healthcare entities and related litigation support engagements; created pro-forma financials; written business plans; conducted a range of industry research; completed due diligence practice analysis; overseen the selection process for vendors, contractors, and architects; and, worked on the arrangement of financing. Mr. Zigrang holds a Master of Science in Health Administration and a Masters in Business Administration from the University of Missouri at Columbia. He is a Fellow of the American College of Healthcare Executives, and serves as President of the St. Louis Chapter of the American Society of Appraisers (ASA). He has co-authored “Research and Financial Benchmarking in the Healthcare Industry” (STP Financial Management) and “Healthcare Industry Research and its Application in Financial Consulting” (Aspen Publishers). He has additionally taught before the Institute of Business Appraisers and CPA Leadership Institute, and has presented healthcare industry valuation related research papers before the Healthcare Financial Management Association; the National CPA Health Care Adviser’s Association; Association for Corporate Growth; Infocast Executive Education Series; the St. Louis Business Valuation Roundtable; and, Physician Hospitals of America. Anne P. Sharamitaro, Esq., is the Executive Vice President & General Counsel of HEALTH CAPITAL CONSULTANTS (HCC), where she focuses on the areas of Certificate of Need (CON); regulatory compliance, managed care, and antitrust consulting. Ms. Sharamitaro is a member of the Missouri Bar and holds a J.D. 
and Health Law Certificate from Saint Louis University School of Law, where she served as an editor for the Journal of Health Law, published by the American Health Lawyers Association. Ms. Sharamitaro has presented healthcare industry related research papers before Physician Hospitals of America and the National Association of Certified Valuation Analysts and coauthored chapters in “Healthcare Organizations: Financial Management Strategies,” published in 2008.
<urn:uuid:95431d06-f70a-48ba-b987-1e125291b7f8>
CC-MAIN-2016-26
http://demonomy.com/3421691.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.878525
2,169
3.015625
3
All of the samples studied have a different sequence of mitochondrial DNA HVR I (Table). An analysis of haplotype structure enabled its attribution to five mitochondrial DNA haplogroups: western Eurasian U2e, U5a, T and eastern Eurasian C and A10. A mixed gene pool structure combining mitochondrial DNA groups typical of human populations from western and eastern parts of Eurasia, have been ascertained for all ancient Western-Siberian forest-steppe human populations that we have studied to date (Pilipenko, 2010).The authors identify two components in the population: (i) the "indigenous" mixed population of West Eurasian (U2e+U5a) and East Eurasian (A10+C), and (ii) the intrusive Andronovo (Fedorovka) (T). They also hint about a special article on the autochthony of the A10 lineages in the region. We now seem to have fairly good data about the existence of a wide West/East Eurasian interaction zone from eastern Europe to Siberia, and it would certainly be interesting to see when this zone was first formed; in any case, it seems clear that at least in the central-northern parts of Eurasia admixture between East and West has been going on for a while. The more interesting question is where did the mtDNA haplogroup-T in Fedorovo groups come from? In Europe, for which we have the best data, T makes its appearance with early Neolithic groups, but it's difficult to imagine that this was the source of T in West Siberia. I would not be surprised if the entrance of T into the boreal zone occurred via the Caucasus, although Grigoriev derives them "from the Near East through Iran and Central Asia into the Irtish basin." Ancient DNA reveals the gradual appearance of new players in both Europe and West Siberia, but their ultimate source(s) and migratory paths remains elusive. Archaeology, Ethnology and Anthropology of Eurasia Volume 40, Issue 4, December 2012, Pages 62–69 An Analysis Of Mitochondrial Dna From The Pakhomovskaya Population Of The Late Bronze Age, Western Siberia V.I. Molodin et al. This article presents the results of an analysis of mitochondrial DNA extracted from bone samples from Stary Sad – a burial ground representing the eastern variant of the Late Bronze Age Pakhomovskaya culture in the Baraba forest-steppe, Western Siberia. Comparison with mitochondrial DNA data from earlier populations of the region and also with archaeological facts, points to the origins of the Pakhomovskaya people. Certain components of their gene pool were evidently derived from the local pre-Andronovo populations, others from the actual Andronovo (Fedorovka) population and also from later immigrants. In this article an integrative reconstruction based on biological and cultural facts is proposed.
<urn:uuid:711b0dee-af64-435f-b23e-961f81d8193d>
CC-MAIN-2016-26
http://dienekes.blogspot.com/2013/06/mtdna-from-late-bronze-age-west-siberia.html?showComment=1370962465899
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940236
602
2.8125
3
Date: September 25, 2006 Creator: Wilson, Clay Description: Since October 2001, Improvised Explosive Devices (IEDs, or roadside bombs) have been responsible for many of the more than 2,000 combat deaths in Iraq, and 178 combat deaths in Afghanistan. IEDs are hidden behind signs and guardrails, under roadside debris, or inside animal carcasses, and encounters with these bombs are becoming more numerous and deadly in both Iraq and Afghanistan. Department of Defense (DOD) efforts to counter IEDs have proven only marginally effective, and U.S. forces continue to be exposed to the threat at military checkpoints, or whenever on patrol. IEDs are increasingly being used in Afghanistan, and DOD reportedly is concerned that they might eventually be more widely used by other insurgents and terrorists worldwide. Contributing Partner: UNT Libraries Government Documents Department
<urn:uuid:67bed601-f85b-4f70-9a01-e23c108c14f6>
CC-MAIN-2016-26
http://digital.library.unt.edu/explore/collections/CRSR/browse/?q=%22international+affairs%22&fq=str_location_country%3AAfghanistan&t=dc_subject&fq=str_location_country%3AIraq
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942159
177
2.953125
3
man pages section 3: Networking Library Functions (Oracle Solaris 11.1 Information Library)
endnetent, getnetbyaddr, getnetbyname, getnetent, setnetent - network database functions
cc [ flag ... ] file ... -lxnet [ library ... ]
#include <netdb.h>

void endnetent(void);
struct netent *getnetbyaddr(in_addr_t net, int type);
struct netent *getnetbyname(const char *name);
struct netent *getnetent(void);
void setnetent(int stayopen);

The getnetbyaddr(), getnetbyname() and getnetent() functions each return a pointer to a netent structure, the members of which contain the fields of an entry in the network database. The getnetent() function reads the next entry of the database, opening a connection to the database if necessary. The getnetbyaddr() function searches the database from the beginning, and finds the first entry for which the address family specified by type matches the n_addrtype member and the network number net matches the n_net member, opening a connection to the database if necessary. The net argument is the network number in host byte order. The getnetbyname() function searches the database from the beginning and finds the first entry for which the network name specified by name matches the n_name member, opening a connection to the database if necessary. The setnetent() function opens and rewinds the database. If the stayopen argument is non-zero, the connection to the net database will not be closed after each call to getnetent() (either directly, or indirectly through one of the other getnet*() functions). The endnetent() function closes the database. The getnetbyaddr(), getnetbyname() and getnetent() functions may return pointers to static data, which may be overwritten by subsequent calls to any of these functions. These functions are generally used with the Internet address family. On successful completion, getnetbyaddr(), getnetbyname() and getnetent() return a pointer to a netent structure if the requested entry was found, and a null pointer if the end of the database was reached or the requested entry was not found. Otherwise, a null pointer is returned. No errors are defined. See attributes(5) for descriptions of the following attributes:
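The manual text above does not include a usage example. The following minimal sketch is not part of the Oracle documentation; it simply illustrates how the five functions fit together. It assumes the system's network database (typically /etc/networks) contains an entry named "loopback" — that name is only an illustrative guess. Per the synopsis, it would be compiled with the -lxnet flag on Solaris; many other systems need no extra library.

#include <netdb.h>
#include <stdio.h>

int main(void)
{
    struct netent *np;

    /* Open the database and keep the connection open across calls. */
    setnetent(1);

    /* Walk every entry in the network database. */
    while ((np = getnetent()) != NULL)
        printf("%-20s addrtype=%d net=0x%lx\n",
               np->n_name, np->n_addrtype, (unsigned long)np->n_net);

    /* Look up a single entry by name; "loopback" is assumed to exist. */
    np = getnetbyname("loopback");
    if (np != NULL)
        printf("loopback network number: 0x%lx\n",
               (unsigned long)np->n_net);

    /* Close the connection to the database. */
    endnetent();
    return 0;
}

Because these functions may return pointers to static data, any fields that must outlive the next getnet*() call should be copied before that call is made.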
<urn:uuid:0a73a90b-d9f9-4b98-aa78-825f352327ba>
CC-MAIN-2016-26
http://docs.oracle.com/cd/E26502_01/html/E29035/setnetent-3xnet.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.714867
505
2.609375
3
What is lenticular? Imagine that you are out shopping and you see an image like this in front of you: How long will you look at this poster? Probably 1 second… Now imagine that the poster looked like this: How long will you look at this image? Much longer… up to 30 seconds! If you understand this, you understand the secret of lenticular imaging, and you will know how to increase tremendously the effectiveness of your next advertising campaign. Click here to see dozens of samples showing how lenticular is bringing success to great marketing ideas. About lenticular imaging The first autostereoscopic image appeared in Europe in the 17th century (Gaspar Antoine de Bois-Clair). After a great deal of evolution, lenticular imaging as we know it was developed around 1940 by Maurice Bonnet. You will find an interesting article on “Lenticular History” at LenticularTechnology.com. Lenticular is an autostereoscopic solution – it allows the viewing of a 3D image without the need for special 3D glasses. It is based on a technology in which a lenticular lens is used to separate the left-eye and right-eye images, creating the perception of depth in printed images without special glasses. This lenticular effect is activated by moving the image and viewing it from different angles. A lenticular lens is in fact an array of lenses that act like tiny looking glasses. Lenticular prints are made from at least two images, combined (usually called “interlaced”) into a single lenticular image. This lenticular process is used to create various frames of animation or to show a set of images that appear to flow into each other. The interlacing technique used to create these images is now readily available in lenticular software. Modern lenticular technology The most widespread technique for printing on thermoplastic materials is lithographic or offset printing. Offset presses are capable of adjusting the printed image with very high precision, guaranteeing good-quality results. Recent technical developments in small-format digital printing, as well as in large-format and flatbed digital printing, have led to tremendous quality improvements, and this technology is now also capable of producing great lenticular prints. These improvements are also making it possible to print lenticular images on flexographic presses. During the last 20 years, lenticular marketing has become more and more important in Europe, the USA and the rest of the world. In addition to lenticular playing cards, business cards, rulers and postcards, lenticular posters, billboards and magazine covers are now being used as well. Lenticular magazine covers are being used along with lenticular DVD and Blu-ray packaging and posters promoting the great new developments in 3D movies and TV. The improvements in flexographic printing are also opening up the possibility of producing lenticular labels. More pragmatic uses include lenticular conversion cards, promotion cards and attractive lenticular packaging. Lenticular lenses are delivered as plastic sheets or rolls, which come in many sizes and different gauges. Besides the printing market, we are supplying more and more high-quality lenticular plastic to the lighting industry. Browse our site to find out what lenticular products we have available for you, in Europe, MEA and India.
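To make the interlacing step mentioned above more concrete, here is a minimal sketch of the basic column-interleaving idea, written as illustrative C rather than taken from any commercial lenticular software. It assumes two equally sized RGB frames already loaded in memory and a hypothetical strip width in pixels; real interlacing tools additionally map strips to the physical lens pitch, the printer resolution and a calibration (pitch) test, and they usually interleave many more than two frames.

#include <stdlib.h>
#include <string.h>

/* Interleave vertical pixel strips from two equally sized RGB images so
   that alternating strips end up under successive lenticules.
   width and height are in pixels; each buffer holds 3 bytes per pixel.
   Returns a newly allocated buffer the caller must free, or NULL. */
unsigned char *interlace_two_frames(const unsigned char *left,
                                    const unsigned char *right,
                                    int width, int height,
                                    int strip_px)   /* strip width in pixels */
{
    unsigned char *out = malloc((size_t)width * height * 3);
    if (out == NULL)
        return NULL;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            /* Even strips take pixels from the left frame, odd from the right. */
            const unsigned char *src =
                ((x / strip_px) % 2 == 0) ? left : right;
            size_t i = ((size_t)y * width + x) * 3;
            memcpy(out + i, src + i, 3);
        }
    }
    return out;
}

In practice the strip width is a fractional number of pixels derived from the measured lens pitch, which is why dedicated interlacing software and a printed pitch test are normally used.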
<urn:uuid:552bcf34-dcad-47f2-bde2-2c5335354b85>
CC-MAIN-2016-26
http://dplenticular.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.912671
674
2.71875
3
Perception Of Obesity Varies With Gender by Jennifer Wider, M.D. (Washington DC 5/29/03): A mother's perception of her overweight child can be tainted by gender according to a recent study in the May 2003 issue of the journal Pediatrics. Mothers of heavy-set children are more likely to perceive their daughters as being "overweight" than their sons. This study conducted by the Chronic Disease Nutrition Branch of the Centers for Disease Control and Prevention, in Atlanta, Georgia also found that approximately one-third of mothers finds no problem with their overweight kids, reporting both boys and girls as being at "about the right weight." Researchers analyzed the results of the Third National Health and Nutrition Examination Survey, which included interviews with mothers of 5,500 children aged 2-11 years. They wanted to determine how mothers perceive their child's weight status. After calculating the child's body mass index, (BMI or weight in kilograms divided by height in meters, squared) the researchers asked mothers to classify their children as "overweight," "about the right weight" or "underweight." When the mothers of overweight children were polled, 14% of them reported their sons as "overweight," while 29% classified their daughters as "overweight." While further research is necessary to uncover the reasons for this significant difference, L. Michele Maynard, PhD, lead researcher and epidemiologist at the CDC, offers some potential explanations: "One possible factor may include a difference in societal standards for acceptable body size for males versus females." She also explains that because girls often mature earlier than boys, their mothers may view them as heavier. Much recent attention has been paid to the prevalence of obesity in our society. While this study does not establish direct causality between maternal perception and obesity, it has important implications for young girls at risk of being overweight. The development of eating disorders and poor self-image is caused by a variety of different factors ranging from depression and anxiety to peer pressure and media influences. Several studies have revealed that parents can increase a child's risk of eating disorders and poor body image with negative reinforcement. According to Dr. Maynard: "Parental concern or dissatisfaction with the weight of their child, often leads to lowered self-concepts of the child and reduced participation in physical activities." Medical experts are concerned, however, with the rising number of children affected by obesity and the associated health risks. The number of obese children in the United States continues to rise. According to recent data from the National Center of Health Statistics in Hyattsville, MD, close to 15 percent (roughly 9 million) of children and adolescents ages 6-19 are overweight. This is three times the number of overweight children and teens in 1980. Obese children are at greater risk for diabetes, high blood pressure, high cholesterol, orthopedic ailments and social problems. Parents should play an active role in preventing obesity in their children. "Parents can help their children by encouraging them to eat lower caloric, nutritious foods such as fruits and vegetables rather than foods that have a high caloric content," states Dr. Maynard. Physical activity is vital in preventing excessive weight gain. "Encouragement for healthy eating and physical activity should be provided carefully in order to protect a child's self esteem," according to Maynard. 
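For readers unfamiliar with the body mass index mentioned above, the calculation itself is simple; the short example below (with made-up numbers, not data from the study) just restates the formula in code.

#include <stdio.h>

/* BMI = weight in kilograms divided by the square of height in meters. */
static double bmi(double weight_kg, double height_m)
{
    return weight_kg / (height_m * height_m);
}

int main(void)
{
    /* Illustrative values only: a 40 kg child who is 1.40 m tall. */
    printf("BMI = %.1f\n", bmi(40.0, 1.40));   /* prints: BMI = 20.4 */
    return 0;
}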
Click here for more information on obesity. The Society for Women's Health Research is the nation's only not-for-profit organization whose sole mission is to improve the health of women through research. Founded in 1990, the Society brought to national attention the need for the appropriate inclusion of women in major medical research studies and the resulting need for more information about conditions affecting women. The Society advocates increased funding for research on women's health, encourages the study of sex differences that may affect the prevention, diagnosis and treatment of disease, and promotes the inclusion of women in medical research studies. Dr. Donnica Moore has been a member of the Society since 1990 and is a past member of its Board of Directors. Created: 5/29/2003  - Jennifer Wider, M.D.
<urn:uuid:a21fc75f-b954-4447-9a46-5bed74d1dc22>
CC-MAIN-2016-26
http://drdonnica.com/news/00006396.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956094
903
3.03125
3
A photographer and author teamed up to capture the geographical, environmental, and historical journey of the Colorado River in their photo-essay book, The Colorado River: Flowing Through Conflict. Peter McBride, a photographer from Colorado, visually documented his aerial expedition along 1,450 miles of the Colorado River, from its headwaters toward its delta. Jonathan Waterman’s text, augmented by his past experiences as a wilderness guide, recounts his own personal travels paddling along the same length of river as well as the history surrounding the waters of the Colorado River. The authors organized the book into three parts, corresponding to the sections of the river as it travels from the Rocky Mountains toward the Sea of Cortez. Their combined intention was to capture the issues facing the river in a photographic record, showing both the beauty and sometimes eerie nature of the Colorado River Basin. The aerial perspective, McBride explained, “shows where we as humans have been, how we connect to the earth, and how nature relates to itself.” McBride began the book by recounting his childhood memories growing up on a Snowmass, Colorado farm near the headwaters of the Colorado River. The introduction to the book, aptly entitled The River, provides a statistical overview of Colorado River, highlighting the more than one-hundred dams obstructing the river’s natural flow. The Colorado River Basin drains 243,000 square miles, spanning seven states and two countries. The river itself supports thirty species of native fish as well as fourteen coal and natural gas power plants, demonstrating the range of reliance on the continuous flow of water. In Part I: The Mountains, the authors describe the beginning of their journey at the Colorado River’s headwaters near the Continental Divide in the Rocky Mountains of Colorado. This section documents the river geographically through the Upper Basin. The river first flows south through Rocky Mountain National Park, then west through Cataract Canyon, where it crosses the border into Utah. The river then winds through the Canyonlands near Moab and spills into Lake Powell. This section also highlights threats to the Upper Basin ecosystem, including impacts of invasive tamarisk and pine beetle on native habitat. A vast number of uranium claims along the Colorado River also pose another potential environmental threat. However, Part I also depicts the many benefits of the river to humans. Recreation activities, especially, sustain the region’s tourism-based economy, including rafting, floating, fishing, and wildlife watching. Part II: Big Reservoirs, Grand Canyon next depicts the Colorado as it flows southwest from Lake Powell toward Lees Ferry. The Colorado River Compact utilized Lees Ferry, a historic river crossing in northern Arizona, as the arbitrary divide between the Upper and Lower Colorado River Basins. The authors’ journey continued on to Lake Mead, the vast reservoir storing water for downstream consumers in Arizona, Nevada, California, and Mexico. The Colorado River slowly travels through the Grand Canyon to Lake Mead, then almost five-hundred miles west to the Hoover Dam. The creation of Grand Canyon National Park in 1919 resulted in formal protection of the landscape. Yet wildlife native to the Colorado continue to face threats to their survival. For example, the humpback chub, a native fish species, adapted to hunt in the shallow, muddy, and warm waters of Little Colorado River. 
However, deep water held behind the dams of the Lower Basin is colder and clearer which nonnative species prefer, such as trout, which compete with native species for limited food resources. Part III: To the Delta documents the final leg of the authors’ journey of the Colorado River toward the sea. This section maps the river’s flow below the Hoover Dam, through the Black Canyon in California south to Baja California, Mexico. However, the river no longer ends at the delta in the Sea of Cortez, but runs dry about fifty miles north. The river delta itself is 95% diminished. A myriad of water diversions have caused the Colorado River to run dry in the Sonoran desert before it reaches the Sea of Cortez. Agricultural irrigators in the region have diverted much of the river into canals, such as Coachella and All-American. Much of the irrigation runoff in southern California flows into the Salton Sea, over two-hundred feet below sea level. The Salton Sea is an important oasis in the desert, visited by over four-hundred bird species. Yet the Sea’s water level is decreasing six inches each year as more river water flows to major cities, resulting in increased salinity levels which threaten the resident fish and birds that prey upon them. This section summarizes these and other downriver ecological impacts of damming and diverting the river for human uses in southern California and northern Mexico. McBride and Waterman depict their personal expedition along most of the Colorado River through colorful photographs and detailed maps that invoke in the reader both feelings of appreciation and concern for the Colorado River. Waterman’s text skillfully integrates summaries of the natural history and geography of the Colorado River Basin with meaningful quotes. His passages describe anthropogenic impacts to the surrounding ecosystems throughout modern history. McBride captures the river from both the ground and aerial perspectives, providing the reader with beautiful natural images rarely seen. The use of historical photos for comparison with current conditions visually demonstrates the environmental impacts of damming the river on the local landscape. This photo-essay book is much more than a collection of pictures and would do well to complete any collection for a water enthusiast or one who simply enjoys the natural beauty of the Colorado River. Peter McBride & Jonathan Waterman, The Colorado River: Flowing Through Conflict, Westcliffe Publishers, Colorado (2012); 160 pp; $27.95; ISBN 978-1-56579-646-1; soft cover.
<urn:uuid:34edc2f9-1256-4cf1-9b4b-92f701282b0f>
CC-MAIN-2016-26
http://duwaterlawreview.com/colorado-river-flowing-thru-conflict/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934302
1,184
2.984375
3
We have recently upgraded to BlackBoard 9.1 and we have taken the opportunity to develop some pedagogical templates to help teachers create courses based around different pedagogical models. We have developed four to date:
- A calendar-based approach
- A topic-based approach
- A project/case study based approach
- A problem-based learning approach
I was tasked with developing a template for the problem-based learning approach. What is it? This structure is useful for inquiry-based modules, where students find or explore materials and activities to investigate or solve problems or cases. It is particularly useful for Science courses, where the students focus on a problem that needs investigating. It is a good example of a constructivist approach to learning. The structure is based around starting with the problem to be solved, which is usually in the form of a question. Students are provided with advice on how to tackle the problem and given suggestions for resources to investigate. The problem can be tackled individually or in groups. The jigsaw pedagogical pattern is a good way of structuring a group-based activity. In this pattern, a group of four students is given different aspects of the problem to investigate. All the students looking at one aspect of the problem then get together with students from the other groups to share their findings. Then they return to their home team and share their collective understanding. What does it look like? The interdisciplinary iScience BSc at the University of Leicester is based on Problem-Based Learning. Each module begins with a key Science problem, such as comparing whether humans can run as fast as machines, or issues around ecology and climate change. The students are presented with the problem and provided with advice on how to tackle it. Each topic addresses at least two discipline perspectives, including Physics, Chemistry, Biology and Ecology. In addition, students are provided with support for any competences they need to develop, such as mathematical or computing skills. Below are a few screenshots from the iScience course. The course is organised into folders around a series of substantive interdisciplinary topics; each topic folder then contains links to relevant resources, expert sessions, group allocation (much of the work on the course is group based), brainstorming documents and any quizzes. Finally, there is a series of sub-folders articulating the key problem that the students are expected to investigate. For example, for the ‘Near Space’ module there is a link to a pdf, which begins with the question: ‘What information regarding glaciation on Mars (and other planets) can we gain from study of glaciers on Earth?’ The document then goes on to articulate relevant information and provides the students with suggestions for how to go about their research.
Figure 1: A screenshot of the first level of folders for the course
Figure 2: Screenshot of the folders in the Science of the Invisible topic
Figure 3: Screenshot of the documentation associated with the frozen worlds topic
<urn:uuid:03f40dfa-5184-476f-a524-b2c04177b5b7>
CC-MAIN-2016-26
http://e4innovation.com/?p=599
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947842
605
3.015625
3
These photographs from the International Space Station show two pieces of a massive iceberg that broke off from the Ronne Ice Shelf in October 1998. Taken on January 6, 2004, the pieces of iceberg A-38 have floated relatively close to South Georgia Island. After 5 years and 3 months adrift, they are approximately 1,500 nautical miles from their origin. In the oblique image, taken a few minutes later, the cloud pattern is indicative of the impact of the mountainous islands on the local wind field. At this time, the icebergs are sheltered in the lee side of the island. When the mass first broke away from the Antarctic Ronne Ice Shelf into the Weddell Sea, it was more than 90 miles long and 30 miles wide and was one of the largest reported icebergs in more than a decade. By the end of October 1998, the iceberg, A-38, began to break-up. Today A-38A is still longer than 40 nautical miles, and A-38B is more than 25 nautical miles long. When ice shelves break up, it is common to ask, “Was the calving related to global climate change?” Dr. Ted Scambos, a scientist at the National Snow and Ice Data Center, compared the calving of Iceberg A-38 to events on the Larsen Ice Shelf and concluded that, “in contrast to what is going on in the northern reaches of the Antarctic Peninsula, the A-38 iceberg calving event on the Ronne Ice Shelf is unlikely to be climate-related.” Over a fifty-year period, the shelf has expanded and contracted, and the A-38 berg actually brought the ice shelf front back to the location it was when first mapped in 1957-58. Positions and sizes of Antarctic Icebergs are reported by the National Ice Center. Both photographs were taken from the International Space Station using a Kodak DCS760 digital camera and a 50-mm lens on January 6, 2004. ISS008-E-12107 was taken first, and ISS008-E-12109 was taken 2 minutes and 37 seconds later. Details provided by Susan Runco, Earth Observations Laboratory, Johnson Space Center. The International Space Station Program supports the laboratory to help astronauts take pictures of Earth that will be of the greatest value to scientists and the public, and to make those images freely available on the Internet. Additional images taken by astronauts and cosmonauts can be viewed at the NASA/JSC Gateway to Astronaut Photography of Earth. - ISS - Digital Camera
<urn:uuid:b874cdbc-a4ec-4c60-b086-b2360c61358a>
CC-MAIN-2016-26
http://earthobservatory.nasa.gov/IOTD/view.php?id=4166&eocn=image&eoci=moreiotd
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.944445
533
3.5
4
A cold front pushed eastward across the continental United States in early 2013, passing through Colorado on January 11. Ahead of the cold front, a dust storm arose along the Colorado-Kansas border. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured this natural-color image on January 11, 2013. Although the dust was thickest in western Kansas, many of the source points for the storm were in Colorado. One dust plume arose roughly 70 kilometers (40 miles) south of Colorado Springs. In Kansas, the eastern edge of the dust storm spanned 240 kilometers (150 miles) and the dust was thick enough to completely hide the land surface below, especially east of Goodland. Salina.com reported that the blowing dust reduced visibility to a quarter of a mile (0.4 kilometers). Dust storms in this region have occurred in the midst of severe, lingering drought. As of January 8, 2013, the U.S. Drought Monitor described drought conditions in western Kansas and southeastern Colorado as “exceptional.” A smaller dust storm struck the same region in November 2012. - Demuth, G. (2013, January 11) Dust storm reduces visibility in northwest Kansas. Salina.com. Accessed January 14, 2013. - U.S. Drought Monitor. (2012, November 13) Current conditions. University of Nebraska-Lincoln. Accessed January 14, 2013. NASA image courtesy Jeff Schmaltz, LANCE MODIS Rapid Response. Caption by Michon Scott. - Aqua - MODIS
<urn:uuid:8d8ef911-72da-43a7-b467-8fc1f96752d8>
CC-MAIN-2016-26
http://earthobservatory.nasa.gov/NaturalHazards/view.php?id=80164&src=nhrss
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.914887
325
3.953125
4
At the beginning of the 2013 legislative session, there were efforts in several states to roll back state renewable energy generation standards. Those efforts appear to have failed. A biofuel project, if approved by the Colorado Public Utilities Commission, would turn excess biomass and beetle-killed timber into electricity. The Colorado legislature passed six different bills supporting electric vehicles in the 2013 session, boosting the state to an A- ranking in a report card study. Several federal Colorado lawmakers introduce a bipartisan bill to streamline the permit process for renewable energy projects on public lands. Colorado Energy News reports on how one county in that state now requires all new homes be wired for electric vehicles and solar panels. Colorado Energy News reports on how Denver, Colorado is the first municipality to be recognized as a Solar Friendly Community by the DOE. Colorado Energy News writes that you could describe the federal government’s approach to energy development on public lands in the West as hypocritical, and you wouldn’t be off the mark.
<urn:uuid:f3671888-3768-441f-a3d7-20ac7e219e09>
CC-MAIN-2016-26
http://earthtechling.com/author/colorado-energy-news/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93097
202
2.609375
3
A new study finds that black patients tend to see physicians who are less qualified and have fewer contacts for high-quality tertiary care institutions: ''When black patients go to the doctor, they're more likely to be treated by a doctor who can't harness the full capabilities of the health care system," said Dr. Peter B. Bach, an epidemiologist at Memorial Sloan-Kettering Cancer Center in New York who was the lead author of the study in the New England Journal of Medicine. Examining patterns of office visits by black and white patients on Medicare, the government health insurance program for the elderly, the study found that most blacks were treated by a subset of doctors who had less training than doctors who treated whites, and who told interviewers that they were frequently unable to provide high-quality care. These doctors, of all races, were less likely than doctors who mostly treated white patients to have passed exams showing mastery of a primary care specialty. They were more likely to report that they could not always help their patients get treatment from specialists, diagnostic imaging such as MRIs, or admission to the hospital when it wasn't an emergency. These differences remained even after the researchers took into account patients' insurance status. The researchers also found out that similar problems beset other physicians in the same geographical areas. In other words, the access to good quality care is reduced for blacks because of racial segregation in housing: blacks tend to live in areas with lower incomes and with fewer highly-qualified physicians. It's easy to see why lower income areas would not attract large numbers of skilled practitioners; the money just isn't there. But this is something that could be changed by government policies. Access to health care has shown a racial difference for a long time. This is partly why black health indicators are consistently lower, too, with higher death rates from cancer and heart disease and a much lower average life expectancy. But equalizing all access measures would not equalize the health outcomes: This would require us to do something about the greater deaths from violence among blacks as well as about the lower average incomes of blacks.
<urn:uuid:9018993b-6d2b-444a-9228-c31c38648d81>
CC-MAIN-2016-26
http://echidneofthesnakes.blogspot.com/2004/08/on-health-care-access-for-black.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983052
427
2.90625
3
While not every painting tells a story, those that do must find a way to convey a narrative—events that happen in time—in just one, static image. How do artists create a story that provides a message or provokes emotions in that single frame? This lesson will help students analyze ways in which the composition of a painting contributes to telling the story or conveying the message through the placement of objects and images within the painting. In this activity, students will be shown one of the ways in which a painting's composition plays a role in telling the image's story. Begin by having the class view: Students who have completed the first lesson of the EDSITEment curriculum unit: Everything in its Right Place: An Introduction to Composition in the Visual Arts will find this image familiar. Have students draw the compositional shape of Carracci's painting on the line drawing of this image available from the Student LaunchPad. Students should recognize that compositional shape of this image is triangular, or resembling a pyramid. Then ask students the following questions about the compositional shape of Carracci's painting: Then have them consider what is going on in Carracci's painting: What is the "story?" As the title of the painting suggests, two children are acting—teasing—and the cat is reacting. In drawing their compositional triangles, students will probably notice that the triangle is not centered on the canvas, but is placed to the right side. The boy sits at the apex of the triangle, leaning over the cat, while the girl leans in from the left corner of the triangle towards the animal. The right hand corner of the triangle—where the cat is placed—falls off the edge of the canvas. While the cat is flush against the right side of the canvas, the empty space to the girl's left emphasizes the closed and cut off right-hand corner of the compositional triangle, compressing the cat. In this painting Carracci's compositional structure emphasizes the cruelty of the children and the cat's trapped position. Ask students to think about what kinds of emotions this composition evokes as they answer the following questions: Students should identify the moment in the "story" of the two children teasing the cat as the climax: the children are using the scorpions to tease the cat and the animal has yet to react. Without yet knowing what will happen—will the cat continue to tolerate their teasing? Will it lash out at the children?—the story is left at its most tense point. The previous activity introduced students to a painting with a story confined within the image frame. The next exercise introduces students to a painting based on a story that originated in a text, rather than in the imagination of the painter. Have students view: What is the focal point of this image? Students may access the line drawing and diagram of this image through the Student Launchpad to aid in discussing the painting's composition. Students will notice that there are no figures placed in the center of the piece. Instead, that space is taken up by a dark gallery set in the distance. Working individually or in small groups, students may diagram the composition of this image, available as a JPEG print out from the Student LaunchPad. What is the painting's composition? Triangular? Oval? Another shape? Students may notice that the figures in the painting form an uneven figure eight. Yet the focal point does not fall on the center of the figure eight. 
Have students work together to create hypotheses to answer the following question: It might help students to know and think about the story that is the basis for the painting. Esther has heard that her people are to be massacred. Desperate to save them, she requests to speak with her husband, the king, at a time and place deemed inappropriate. The violation of the rules of appropriate interaction between Esther and Ahasuerus could result in Esther's execution. At the moment in the story pictured by Gentileschi, King Ahasuerus has just granted Esther the opportunity to speak and she has fainted in relief that she will be heard and will not be put to death for impertinence. When she regains consciousness she will successfully plead for Ahasuerus to stop the massacre of the Persian Jews. Have students work together in the same small groups to answer the following questions: Gentileschi may have chosen to structure the composition of the painting in this way in order to heighten the drama of the scene. By visually separating the figures of Ahasuerus and Esther, Gentileschi inserted a marker of the distance between the two figures that acts as a reminder of the violation of custom that characterizes Esther's request to speak. The space between the husband and wife marked by the empty center of the canvas might be a means Gentileschi used to emphasize Esther's courageous decision to risk her life to save her people. In this activity, students will examine a painting whose vague story was created to convey the painter's message—his ideas about modern, urban life. Begin by having students view: Divide the class into small groups of three to four students to work together analyzing the painting. Students may access the line drawing of this image through the Student Launchpad to aid in discussing Kirchner's painting. Unlike the Gentileschi painting, Kirchner's image is not based on a text or a known story. Instead, it appears to be a scene taken from daily life in which Kirchner is simply showing street life of Dresden at the turn of the century. One could imagine that there are a number of stories contained within this image: For example, what is the action taking place between the little girl pictured at the center of the painting and the hand that holds her? Is she pulling away from that hand? Is it that she would like to play, or is it that she fears the person holding her? Although it may be possible to imagine a number of stories for the "figures" that appear in Kirchner's painting, this work is unlike either the Carracci or the Gentileschi paintings: telling a specific story does not appear to be Kirchner's purpose. What is Kirchner trying to convey, if not a particular story? Students will try to find clues to answer this question by examining the composition of the piece. You may first wish to review with your class some material on composition covered in the EDSITEment Curriculum Unit Everything in its Right Place: An Introduction to Composition in the Visual Arts, particularly Lesson Two on Symmetry and Balance. Working in the same small groups, ask students the following questions: Students should note that it is difficult to discern an overall compositional shape in Kirchner's painting. They should also note that the composition is not symmetrical. Investigating the elements that answer these two questions will provide the basis for students to discuss balance in the work. Ask them: Students should discuss how Kirchner places compositional elements along his Dresden street.
The painting is visually divided, roughly at the center of the canvas, between the pink pavement and the distant people crowded together on the left hand side of the painting, and the busy group of pedestrians clustered on the right hand side of the painting. Weight between the two sides is not evenly distributed, and this effect is represented in other ways, including differences in proportion or size. Figures on the right side of the painting appear much closer to the viewer, as they are proportionately larger and thus convey a sense of nearness. Significantly, figures on the right side also face the viewer, while those on the left side do not (the little girl with the large hat stands at the axis). Typically, just as our attention is drawn in a room full of people towards those whose faces we can see because they give us more information about the occasion and their mood, and consequently, our response to them, in works of art, faces also draw the viewers' eyes towards them. On the right side, the inclusion of faces gives a sense of identity to some of the figures, while those on the left are indistinguishable. That sense of identity lends weight to the right side of the canvas. In creating this painting Kirchner made a conscious decision to distribute the "weight" of the composition unevenly. Thus the painting presents asymmetrical balance. As they continue to work in their small groups, ask students to construct an hypothesis to answer the following question: Students might begin by considering the effect Kirchner achieved by composing his painting in this way. What tone or mood permeates Street, Dresden? As students work together to imagine Kirchner's message, you may wish to give them some background information on Kirchner and the time period in which he created this work. You may want to give students information in pieces, allowing them time to assimilate that information before transmitting the next piece. You may also want to share further information on Die Brücke movement accessible through Art Safari. Even the figures' dress becomes symbolic not only of urban life and the alienation of the contemporary world, but also of bourgeois values. Each figure is dressed in the well-tailored clothing of the burgeoning middle class, costumed with petticoats, purses, fitted jackets, and feathered hats. As a piece that is meant as a critique of the current social order, Kirchner purposely chose to structure the painting the way he did in order to make his point: A slightly off-kilter composition for a slightly off-kilter world. Have students write a brief explanation of the different kinds of stories that each of the three paintings they examined in this lesson tell, and how the artists used compositional elements to convey narratives and messages. Students should be sure to compare and contrast the three paintings, examining the similarities and differences in the kinds of stories and the way that the artists have conveyed those stories. Have students choose one of the following four images, all available from The Smithsonian American Art Museum or the National Gallery of Art: Students should read any information that the museum provides with the image and then view the image closely and make notes on the composition. Then they should write a page describing how the composition contributes to and enhances the telling of the painting's story. Your students may want to delve deeper into the relationship between composition and content through a research project. 
For this assignment your students should find an image depicting a familiar story using the following EDSITEment-reviewed museum websites: Once they have identified an image, they should conduct research on both the painting itself and on the story depicted to uncover additional or unfamiliar aspects. After gathering this information, they may write a short essay explaining how the painting successfully conveys a subtle aspect of the story through the use of composition. They should focus on something that is not immediately obvious. Instead, they should seek elements of the composition which heighten the mood or drama of the image specifically through the placement of objects and figures within the picture frame. 1-2 class periods
<urn:uuid:e5518438-c0c1-4e8b-8257-dbaa184b13f6>
CC-MAIN-2016-26
http://edsitement.neh.gov/lesson-plan/composition-and-content-visual-arts
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948713
2,249
4.09375
4
Scholars from the University of Chicago developed, and master teachers tested, this resource to provide an overview of Middle Eastern cultures and their contributions to the world. Ancient Greeks/Modern Lives aims to inspire people to come together to read, see, and think about classical literature and how it continues to influence and invigorate American cultural life. Crafting Freedom Materials is a comprehensive NEH-funded resource on the African-American experience during the antebellum period. For teachers of social studies, language arts, and other humanities subjects. The words of the King James Bible ring out today in books, poems, popular songs, speeches, and sermons. Visit Manifold Greatness for the story of one of the most widely read books in the English language. The NEH Created Equal initiative uses the power of documentary films to encourage public conversations about the changing meanings of freedom and equality in America. The five films that are part of this project tell the remarkable stories of individuals who challenged the social and legal status quo, from slavery to segregation. Virtual_Oaxaca is a virtual representation of Oaxaca, the city, surrounding archeological sites, and arts communities. Created by teachers in an NEH-funded Summer Institute. Plan a lesson, watch a video, and peek at Oaxaca on Second Life. More to come! Read historical fiction stories that illuminate Chicago's past. Use the Interactive History Map to look closer at artifacts from the collection of the Chicago History Museum and to explore locations throughout the city from each story. Build further on your experience with classroom activities. Picturing United States History, an NEH-funded project is based on the belief that visual materials are vital to understanding the American past. The website provides online "Lessons in Looking," a guide to Web resources, forums, essays, reviews, and classroom activities to help teachers incorporate visual evidence into the classroom. The site also serves as a clearing house for incorporating visual documents into their U.S. history, American studies, literature, and other humanities courses. NEH funded online archive of educational resources on the history of natural law, natural rights and American Constitutionalism designed and written by scholars associated with the James Madison Program in American Ideals and Institutions at Princeton University. The National Endowment for the Humanities (NEH) marked the 10th anniversary of the tragedy of September 11th with a series of events and opportunities for remembrance and reflection across the country. Making The Wright Connection is an online community of, and clearinghouse for, scholars and teachers of the works of Richard Wright (1908-1960), the author of such major works as Uncle Tom’s Children, Native Son, and Black Boy. Website includes podcasts of lectures by some of the world’s foremost scholars of Wright. This site highlights recent research of scholars who have provided new insights about the cultures and histories of Indian peoples in the Midwest. The products of this NEH-funded Summer Institute for School Teachers offers a wealth of curricular plans and interactive ideas for the classroom. Topics cover a variety of disciplines: history, geography, literature, religion, art, and environmental studies for every grade level. The Stalin Project is a multi-media, interactive resource about Stalin and the Soviet people. 
This site includes text written by the top scholars in the field, a database of over 500 images, primary source documents, videos, lesson plans, and other interactive material. Picturing Hawai'i is a new curriculum from the Honolulu Museum of Art. The comprehensive Teacher Resource Book and the accompanying six images show you how to use works from the Museum's collection to supplement your lessons in history, fine arts, language arts, math, and science. Housed within the History Department at the University of Wisconsin-Madison, CSAC does research and publishes materials relating to the creation and ratification of the American Constitution. During the American Civil War, this battle catapulted Thomas J. "Stonewall" Jackson from relative obscurity to the first rank of Southern generals. Explore this interactive map of the second half of the campaign. Hosted by the Encyclopedia of Virginia. Image: General "Stonewall" Jackson. Virginia Historical Society, William Garl Brown, ca. 1865–1900 The Seven Days’ Battles, June 25–July 1, 1862, the decisive engagements of McClellan's Peninsula Campaign included Confederate Balloon Reconnaissance. "Jeb" Stuart became legend by executing his famous “Ride around McClellan.” Hosted by the Encyclopedia of Virginia. A resource set developed by The Education Department at Mystic Seaport for the “Year of the Charles Morgan,” commemorating the re-launch of the Charles W. Morgan, the only remaining wooden whaling ship in existence. The materials and features found in this resource set contain primary source material and other content related to the Morgan and whaling. For more whaling resources, visit the Whaling Resource Set. Ohioana Authors, supported in part by the Ohio Humanities Council, celebrates Ohio’s rich literary and historical heritage and Ohio’s contribution to American culture through the written word. A resource developed from NEH Summer Institutes held at Salem State University exploring early American art and culture. The website assists teachers of American history, literature, art, geography, social studies, American studies, and other fields who wish to incorporate American art into their classrooms. It includes podcasts, unit plans, and print and electronic bibliographies. This National Endowment for the Humanities Summer Institute for Teachers held at the Steinbeck Institute, San Jose State University, contains a rich collection of scholarly essays, lesson plans, maps, and images covering Steinbeck's work and his world. This resource site for early American history features a constantly growing digital collection of primary sources — print and manuscript documents, as well as images — and transcribed versions of these materials from various libraries and archives. It also includes a host of K–12 teaching resources including timeline, interactive primary sources and lesson plans. Race—Are We So Different? is a project of the American Anthropological Association. A traveling exhibit and website, it looks through the eyes of history, science, and lived experience to explain differences among people and reveal the reality—and unreality—of race. The site contains a virtual tour of the exhibit, resources for middle and high school teachers, STEM resources, and a robust American history section with an interactive timeline. Seventeen Moments in Soviet History contains a rich archive of texts, images, maps, and audio and video materials from the Soviet era (1917–1991). 
The materials are arranged by year and by subject, are fully searchable, and are translated into English. Students, educators, and scholars will find materials about Soviet propaganda, politics, economics, society, crime, literature, art, dissidents, and hundreds of other topics. Seven visual essays presented in video casts designed to make art from Muslim societies an integral part of the Muslim Journeys experience. The Art Spots were written and presented by D. Fairchild Ruggles, Professor of Art, Architecture, and Landscape History, University of Illinois, Urbana-Champaign, and produced by Twin Cities Public Television. A multi-media educational project, “Nature, Culture, and History at the Grand Canyon,” includes an interactive website and DVD, digital audio-tour, walking tour brochure, and educational resources for K–20 teachers including Travelin’ Trunks and Lesson Plans. National History Day makes history come alive for America's youth by engaging them in the discovery of the historic, cultural, and social experiences of the past and inspiring them through exciting competitions and transforms teaching through project-based curriculum and instruction. TeachingFlorida.org is designed to bring the study of Florida into the classrooms of our state. Created by the Florida Humanities Council, it combines the scholarship of distinguished humanities scholars with ideas and lesson plans from Florida teachers. Produced by the American Social History Project, City University of New York, and funded through NEH's Summer Seminars Program, this resource provides multimedia presentations by historians, art historians, and archivists that are accompanied by archival images; primary documents illuminating aspects of the subject; and a bibliography of books, articles, and online resources. The National Archives and The University of Virginia Press developed this online resource with historical documents of the founders of the United States of America. Through this website, you will be able to read and search through thousands of records from George Washington, Benjamin Franklin, Alexander Hamilton, John Adams, Thomas Jefferson, and James Madison and see firsthand the growth of democracy and the birth of the Republic. The result of an NEH-funded Summer Institute for School Teachers at the Mississippi Valley Archaeology Center, this site displays a rich array of humanities and STEM teacher-created lesson plans and teacher development materials from the elementary through the high school level covering archaeology, anthropology, biology, botany, geology, mathematics, and more. This site is the product of the Religious Worlds institute, a project of the Interfaith Center of New York and Union Theological Seminary, with support from the NEH. The site offers an array of lesson plans, curriculum idea, and professional development based on NEH Summer Institutes for School Teachers that delve into the doctrines of the world's major religions and encourage academically grounded engagement with the social realities of contemporary religious communities. Distinguished historian Gordon Wood, in conversation with President of Gilder Lehrman Jim Basker, discuss the idea of America. Offered through the Social History Project at City University of New York, this special feature of the NEH-funded Picturing History website, contains targeted videos, lectures, and a wealth of visual and textual primary source material on Civil War subjects for the classroom. 
Explore historical maps, discover stories you never knew, find people and historical events related to the Mall's past. This collaborative production of the college teacher-participants in a 2011 NEH summer humanities institute at the Folger Shakespeare Library models various approaches, contexts, and resources. Collectively, this sampler of participant themes with applications for teaching, faculty video clips, and annotated bibliographies provides exciting new materials for teaching and research. MIT’s HyperStudio Lab for Digital Humanities’ investigative experience into decisions faced on the eve of the American Revolution by Boston’s Old North Church congregation in 1775. A collaborative production of the college teacher-participants in a 2011 NEH summer humanities institute at the Folger Shakespeare Library. Over the course of five weeks, and with the guidance of faculty experts, the institute explored the historical developments through which the hyperbolic ambition signaled by the name of Shakespeare’s theatre became a reality. Standing Together is an NEH initiative to promote understanding of the military experience and to support returning veterans through films, literature, drama, discussion, and more. The Emily Dickinson Museum in Amherst, Massachusetts, includes The Homestead, where the poet was born and lived most of her life, and The Evergreens, home of the poet’s brother and his family. It has been the site of several NEH Landmarks of American History and Culture Workshop for Schoolteachers. Teacher Resource page includes a number of curriculum projects from NEH Schoolteacher Summer Scholars. This Massachusetts Humanities website provides teachers and middle-school students with the opportunity to engage in a range of philosophical discussions. This secondary-level curriculum packet, produced in connection with the State House Women’s Leadership Project and developed by Massachusetts Humanities and the Tsongas Industrial History Center at UMass/Lowell, focuses on two of the six State House honorees: Lucy Stone (1818–1893) and Sarah Parker Remond (1824–1894) and includes websites, and other resources that can be used for teaching about the struggle for equality (The Teacher’s Guide, Primary Source Documents, Resource Guide and “HEAR US” virtual tour.) This NEH-funded archive based at University of Nebraska–Lincoln Center for Digital Research in the Humanities traces the growth of railroads, telegraphs, and steam ships from 1850 to 1900 and the dynamic social change they brought to America. The website includes primary sources and teaching materials. The Wadsworth Atheneum Museum of Art, the oldest continually-operating public art museum in the United States, has experienced an extensive renovation funded in part by NEH. Major exhibitions and newly refurbished collections offer new interpretive content and deeper engagement with the artwork. An online collection of educational resources provide creative strategies for effectively addressing student learning objectives through the visual arts. This site hosts a library of virtual artifacts, education curricula, and museum exhibits (forthcoming). These programs are designed to foster research and study about the historical experiences of people with disabilities and their communities. Explore the fascinating origins of the Bible and its eventful history. 
On Bible Odyssey, the world’s leading scholars share the latest historical and literary research on key people, places, and passages of the Bible A New Nation Votes is a searchable collection of election returns from the earliest years of American democracy. The data were compiled by Philip Lampi. The American Antiquarian Society and Tufts University Digital Collections and Archives have mounted it online for you with funding from the National Endowment for the Humanities. A multifaceted exploration of the people, places, as well as passages of the Bible is enriched by searchable themes, a timeline, glossary, and much more, including completely searchable texts for three English versions. This digital collection from the University of Florida George A. Smathers Libraries offers resources for research into a variety of historical children’s literature, including comparative editions of classic texts.
<urn:uuid:132e2734-bea1-4fc8-afc9-32a6e9cd4548>
CC-MAIN-2016-26
http://edsitement.neh.gov/neh-connections/websites-teachers-and-learners
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.919787
2,842
3.375
3
The Aging Network To meet the diverse needs of the growing numbers of older persons in the United States, the Older Americans Act of 1965 (OAA), as amended, created the primary vehicle for organizing, coordinating and providing community-based services and opportunities for older Americans and their families. All individuals 60 years of age and older are eligible for services under the OAA, although priority attention is given to those who are in greatest need. The OAA established a national network of federal, state, and local agencies to plan and provide services that enable older adults to live independently in their homes and community. This interconnected structure of agencies is known as the National Aging Network. The National Aging Network is headed by the U.S. Administration on Aging. The network includes 56 State Agencies on Aging, 629 Area Agencies on Aging, 246 Native American aging programs, over 29,000 service providers, and thousands of volunteers.
<urn:uuid:3a32d238-a4a1-462f-a675-71e382052e7e>
CC-MAIN-2016-26
http://eldercare.gov/Eldercare.NET/Public/About/Aging_Network/Index.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943345
203
2.796875
3
Hesperian Health Guides
Barrier methods of family planning
Barrier methods prevent pregnancy by blocking the sperm from reaching the egg. They do not change the way the woman’s or man’s body works, and they cause very few side effects. Barrier methods are safe if a woman is breastfeeding. Most of these methods also protect against STIs, including HIV. When a woman wants to become pregnant, she simply stops using the barrier method. The most common barrier methods are the condom, condoms for women, the diaphragm, and spermicides. If a condom breaks or comes off the penis, the woman should put spermicide in her vagina immediately. If possible, use emergency family planning.
The condom is a narrow bag of thin rubber that the man wears on his penis during sex. Because the man’s semen stays in the bag, the sperm cannot enter the woman’s body. Condoms are the best protection against STIs and HIV. They can be used alone or along with any other family planning method. Condoms can be bought at many pharmacies and markets, and are often available at health posts and through AIDS prevention programs. Be careful not to tear the condom as you open the package. Do not use a new condom if the package is torn or dried out, or if the condom is stiff or sticky. The condom will not work. The condom must be put on the man’s penis when it is hard, but before it touches the woman’s genitals. If he rubs his penis on the woman’s genitals or goes into her vagina, he can make the woman pregnant or can give her an STI, even if he does not spill his sperm (ejaculate).
How to use a condom:
1. If the man is not circumcised, pull the foreskin back. Squeeze the tip of the condom and put it on the end of the hard penis.
2. Keep squeezing the tip while unrolling the condom, until it covers all of the penis. The loose part at the end will hold the man’s sperm. If you do not leave space for the sperm when it comes out, the condom is more likely to break.
3. After the man ejaculates, he should hold on to the rim of the condom and withdraw from the vagina while his penis is still hard.
4. Take off the condom. Do not let sperm spill or leak.
5. Tie the condom shut and dispose of it away from children and animals.
A woman who is using another family planning method should also use condoms if she needs STI protection.
- Use a condom every time you have sex.
- If possible, always use condoms made of latex. They give the best protection against HIV. Condoms made of sheepskin or lambskin may not protect against HIV.
- Keep condoms in a cool, dry place away from sunlight. Condoms from old or torn packages are more likely to break.
- Use a condom only once. A condom that has been used before is more likely to break.
- Keep condoms within reach. You are less likely to use them if you have to stop what you are doing to look for them.
More information: encouraging your partner to use condoms. At first, many couples do not like to use condoms. But once they get used to it, they may even recognize benefits besides protecting against unwanted pregnancies and STIs. For example, condoms can help some men last longer before they come.
The condom for women (female condoms)
Female condoms are larger than condoms made for men and are less likely to break. They work best when the man is on top and the woman is on the bottom during sex. A female condom, which fits into the vagina and covers the outer lips of the vulva, can be put in the vagina any time before sex. It should be used only once, because it may break if it is reused.
But if you do not have any other condoms, you can clean it and reuse it up to 5 times. The female condom should not be used with a male condom. The female condom is the most effective of the methods controlled by women in protecting against both pregnancy and STIs, including HIV. There are now 3 types of female condom available. The newest are less expensive. The VA female condom fits more closely to the woman’s body, so it is more comfortable and makes less noise during sex. Female condoms are available only in a few places now. But if enough people demand this method, more programs will make them available.
How to use the female condom:
1. Carefully open the package.
2. Find the inner ring, which is at the closed end of the condom.
3. Squeeze the inner ring together.
4. Put the inner ring in the vagina.
5. Push the inner ring up into your vagina with your finger. The outer ring stays outside the vagina.
6. When you have sex, guide the penis through the outer ring.
7. Remove the female condom immediately after sex, before you stand up. Squeeze and twist the outer ring to keep the man’s sperm inside the pouch. Pull the pouch out gently, and then dispose of it out of reach of children and animals.
When a diaphragm is used correctly, it prevents pregnancy most of the time and may also give some protection against STIs. The diaphragm is a shallow cup made of soft rubber or thin silicone that a woman wears in her vagina during sex. The diaphragm covers the cervix so that the man’s sperm cannot get into her womb. The diaphragm should be used with spermicide. If you do not have spermicide, you can still use the diaphragm, but it may not work as well to prevent pregnancy. Diaphragms come in different sizes, and are available at some health posts and family planning clinics. A health worker who has been trained to do pelvic exams can examine you and find the right size diaphragm. Diaphragms can get holes, particularly after being used for more than a year. It is a good idea to check your diaphragm often. Replace it when the rubber gets dry or hard, or when there is a hole in it. You can put the diaphragm in just before you have sex or up to 6 hours before. If you have sex more than one time after you put the diaphragm in, put more spermicide in your vagina each time before you have sex, without removing the diaphragm.
How to use a diaphragm:
1. If you have spermicide, squeeze it into the center. Then spread a little bit around the edge with your finger.
2. Squeeze the diaphragm in half.
3. Open the lips of your vagina with your other hand. Push the diaphragm into your vagina. It works best if you push it toward your back.
4. Check the position of your diaphragm by putting one of your fingers inside your vagina and feeling for your cervix through the rubber of the diaphragm. The cervix feels firm, like the end of your nose. The diaphragm must cover your cervix.
5. If the diaphragm is in the right place, you will not be able to feel it inside you.
6. Leave the diaphragm in place for at least 6 hours after sex.
You can leave the diaphragm in for up to 24 hours. It is OK to use the diaphragm during monthly bleeding, but you will need to remove it and clean it as often as you would change a cloth or pad. To remove the diaphragm: Put your finger inside your vagina. Reach behind the front rim of the diaphragm and pull it down and out. Wash your diaphragm with soap and water, and dry it. Check the diaphragm for holes by holding it up to the light. If there is even a tiny hole, get a new one.
Store the diaphragm in a clean, dry place.
Spermicide (contraceptive foam, tablets, jelly, or cream)
Spermicide comes in many forms—foam, tablets, and cream or jelly—and is put into the vagina just before having sex. Spermicide kills the man’s sperm before it can get into the womb. If used alone, spermicide is less effective than some other methods. But it is helpful when used as extra protection along with another method, like the diaphragm or condom. Spermicides can be bought in many pharmacies and markets. Some women find that some types of spermicides cause itching or irritation inside the vagina. Spermicides do not provide protection against any STI. Because spermicides can irritate the walls of the vagina, they may cause small cuts that allow HIV to pass more easily into the blood.
When to insert spermicide: Tablets or suppositories should be put in the vagina 10 to 15 minutes before having sex. Foam, jelly, or cream work best if they are put in the vagina just before having sex. If more than one hour passes before having sex, add more spermicide. Add a new tablet, suppository, or applicator of foam, jelly, or cream each time you have sex.
How to insert spermicide:
- Wash your hands with soap and water. To use foam, shake the foam container rapidly, about 20 times. Then press the nozzle to fill the applicator. To use jelly or cream, screw the spermicide tube onto the applicator. Fill the applicator by squeezing the spermicide tube. To use vaginal tablets, remove the wrapping and wet them with water or spit on them. (DO NOT put the tablet in your mouth.)
- Gently put the applicator or vaginal tablet into your vagina, as far back as it will go.
- If you are using an applicator, press in the plunger all the way and then take out the empty applicator.
- Rinse the applicator with clean water and soap.
- Leave the spermicide in place for at least 6 hours after sex. Do not douche or wash the spermicide out. If cream drips out of your vagina, wear a pad, cotton or clean cloth to protect your clothes.
<urn:uuid:f1123f7b-f249-45a4-9237-55f3153a2bab>
CC-MAIN-2016-26
http://en.hesperian.org/hhg/Where_Women_Have_No_Doctor:Barrier_Methods_of_Family_Planning
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928449
2,218
2.5625
3
Family: Hydrobatidae, Storm-Petrels
Description: All birds have dark gray-brown plumage, palest on underside of flight feathers and with pale bar on upper wing coverts. Rump is white. Legs are dark and feet have yellow webs.
Dimensions: Length: 7" (18 cm)
Habitat: Breeds on islands in southern oceans (in our winter) and moves to North Atlantic outside its breeding season. Common offshore visitor (present mainly May-Sep) and seldom seen within sight of land.
Observation Tips: Easy to see on pelagic and whale-watching trips during summer months. Sometimes gathers in large concentrations where feeding is good, e.g. around fishing boats.
Range: Mid-Atlantic, California, New England, Eastern Canada, Southeast, Florida
Voice: Silent at sea.
Discussion: Tiny seabird that looks all-dark at a distance, but with a striking and contrasting white rump and undertail coverts. At close range, note the square-tipped tail (may appear a bit notched when not spread), the relatively long legs (when outstretched, the toes project beyond the tail), and the pale panel on the upper wing coverts. It glides on outstretched, flat wings and also flutters low over the water, pattering the surface with its dangling feet. Sexes are similar.
<urn:uuid:8aaee9e8-8088-4b81-9ac7-ca9d45001aea>
CC-MAIN-2016-26
http://enature.com/fieldguides/detail.asp?shapeID=957&curGroupID=1&lgfromWhere=&curPageNum=11
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907607
287
2.96875
3
The Office of International Affairs (IA) coordinates DOE participation in the Energy and Climate Partnership of the Americas (ECPA). DOE and the State Department’s Office of Western Hemisphere Affairs chair ECPA for the United States. ECPA is a key multilateral mechanism to advance clean energy deployment and reduce the climate change impacts of energy technologies in the Western Hemisphere. ECPA is a flexible mechanism through which governments in the Western Hemisphere, on a voluntary basis, may lead multi-country or bilateral initiatives. Announced at the Summit of the Americas in 2009, the goals of the partnership are to accelerate clean energy development and deployment, advance energy security, and reduce energy poverty by sharing best practices, encouraging investment, and cooperating on technology research, development and deployment. ECPA comprises seven pillars that address: energy efficiency; renewable energy; cleaner and more efficient use of fossil fuels; energy infrastructure; energy poverty; sustainable forests and land use; and adaptation. Over 40 ECPA projects have been undertaken. The Organization of American States (OAS) hosts the ECPA website, which contains details on the many ECPA activities completed and undertaken, and serves as clearing house for projects, events and activities targeting energy and climate issues in the region. Planning for the next ECPA Energy Ministerial is underway for late 2014 and will be hosted by the Government of Mexico.
<urn:uuid:510e1165-18eb-4749-987b-5fc8c145f65d>
CC-MAIN-2016-26
http://energy.gov/ia/initiatives/energy-and-climate-partnership-americas
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924139
272
2.671875
3
common name: tree snails of Florida scientific name: Gastropoda: Stylommatophora: Bulimulidae Many snails are found in trees, but only a few are exclusively arboreal for most or all of their life cycle. Tree snails are normally found on the ground only during egg-deposition or when dislodged from their perches. They are frequently large, up to 70 mm long, but tend to be smaller in colder areas. They are restricted to tropical and semi-tropical regions by their need for high humidity and warm temperatures. Tree snails are included in several families, but the Bulimulidae and the Pupillidae are the only two represented on the United States mainland. In the Americas, the center of diversity of the Bulimulidae is in northern South America to Brazil, with representatives spreading northward through Central America and the Caribbean to the southeastern United States (Solem 1969, Breure 1979). The bulimulids are not exclusively arboreal as many species live in leaf-mold, under or near rocks, or on rock faces. However, all native Florida bulimulids are arboreal. The United States has four native genera of Bulimulidae: Rhabdotus, Drymaeus, Orthalicus, and Liguus. The last three genera are native to Florida. There is also one recently introduced genus in Florida, Bulimulus, which is primarily terrestrial (Thompson 1976). The systematic relationships of the native species were summarized by Pilsbry (1946). The arboreal representatives feed on epiphytic growths such as algae, fungi and lichens on trees. Figure 1. Orthalicus reses (Say) [left], a federally listed threatened species, and Achatina fulica (Bowditch) [right], a major agricultural pest similar in appearance to Orthalicus reses. Photographs by Division of Plant Industry. Orthalicus reses (Say) is a federally listed, threatened species due to restricted range and habitat destruction and cannot be legally collected without a federal permit. Liguus fasciatus (Müller) has been proposed as an endangered species in the past but has not been so designated. Most of the other native Florida bulimulids appear to be wide-ranging and numerous. Except for scientific study, these snails should not be collected, as they are not agricultural pests and may actually be beneficial, because they feed on epiphytic growths. The bulimulids of Florida have ovate-conical or bulimoid shells that at maturity range in size from 15 mm to 70 mm. With the exception of Liguus fasciatus, these snails have shells that vary in color from ivory to tan, often with brown markings. Liguus shells are brilliantly colored and are frequently marked with yellow, green, pink, and brown. The bulimulid shell surface is smooth, sometimes glossy, and with protuberances. Live snails are most often found in native hammock trees and shrubs, but frequently live in citrus groves and backyards. Figure 2. Key identification features. Drawing by Division of Plant Industry. 1. Mature shell larger than 40 mm, umbilicus imperforate, apex microscopically smooth . . . . . 5 1'. Mature shell smaller than 40 mm, umbilical perforation narrow, apex microscopically sculptured . . . . . 2 2(1'). Shell thin, translucent to almost transparent, fragile . . . . . 4 2'. Shell solid, opaque to slightly translucent, not fragile . . . . . 3 3(2'). Shell with vertical chestnut brown stripes, blue to black apex . . . . . lined tree snail, Drymaeus multilineatus (Say, 1825). Brown subsutural and basal bands are also present, and can be as wide as 2 mm in some Keys specimens, or lacking altogether. 
This species is found on terminal twigs of both native and exotic trees and shrubs in the southern counties of Florida, in the Florida Keys, and in the Caribbean. Figure 3. The lined forest snail, Drymaeus multilineatus (Say, 1825). Photograph by Bill Frank, www.jaxshells.org 3'. Shell lacking vertical stripes, apex brown to ivory . . . . . West Indian Bulimulus, Bulimulus guadalupensis (Bruguière, 1789). This shell is marked by one to two faint or three strong brown spiral bands and a narrow white subsutural band. Introduced from Puerto Rico, this species is found on low-lying ground-covers and in lawns in southeastern Florida and is moving northward. In addition, in 2009 and 2010, populations were reported in Duval and Nassau counties, approximately 200 miles north of confirmed populations (Frank and Lee 2010). Figure 4. The West Indian Bulimulus, Bulimulus guadalupensis (Bruguière, 1789). Photograph by Bill Frank, www.jaxshells.org. 4(2). Shell 25 to 30 mm, with 3 to 4 wide spiral rows of chestnut-brown squares on the body whorl, lip of aperture in mature shell slightly flared . . . . . Manatee treesnail, Drymaeus dormani (Binney, 1857). The markings can be faint to lacking in some specimens. This species is endemic to North and Central Florida north of Lake Okeechobee, and has been reported on palmetto, orange and grapefruit trees (Pilsbry 1946). Figure 5. The manatee treesnail, Drymaeus dormani (Binney, 1857). Photograph by Phil Poland, www.jaxshells.org. 4'. Shell 15 to 25 mm, with 3 to 5 irregular narrow brown bands on the body whorl, lip of aperture not flared . . . . . Master treesnail, Drymaeus dominicus (Reeve, 1850). The bands can be unevenly broken or even lacking. This species can be differentiated from Drymaeus dormani by the rounder whorls, smaller adult size, and lack of a flared apertural edge. It is found on citrus and native trees in southeastern Florida south of Lake Okeechobee to the Florida Keys and parts of the Caribbean. Figure 6. The master treesnail, Drymaeus dominicus (Reeve, 1850). Photograph by Phil Poland, www.jaxshells.org. 5(1). Length of aperture more than half overall length, shell thin-walled, external markings visible inside the aperture . . . . . 6 5'. Length of aperture less than half overall length, shell heavy and porcelain-like, aperture white to faintly pink inside . . . . . Florida tree snail, Liguus fasciatus (Müller). The color patterns in this species are extremely variable. At this time, there are 58 named color forms in South Florida and the Florida Keys (Davidson 1965, Jones 1979, Diesler 1982), with others in Cuba. This animal is generally found on smooth-barked trees in native hammocks. Figure 7. The color patterns of the Florida tree snail, Liguus fasciatus (Müller), are extremely variable. Photograph by Bill Frank, www.jaxshells.org. 6(5). Shell with irregular, flame-like, vertical brown stripes . . . . . 7 6'. Shell lacking flame-like stripes . . . . . banded tree snail, Orthalicus floridensis Pilsbry, 1891. This is the largest Florida tree snail, and is tan with two to three spiral brown bands and one to four dark brown vertical growth lines. Both the margin of the aperture and the parietal callus are dark brown. This native species is endemic to South Florida and the Florida Keys on native and introduced trees. Figure 8. The banded tree snail, Orthalicus floridensis Pilsbry 1891, is the largest Florida tree snail. Photograph by Robert Pilla, www.jaxshells.org. 7(6). Apex white, parietal callus clear or faintly chestnut . . . . . 
Stock Island tree snail, Orthalicus reses reses (Say, 1830). This snail and the next subspecies, Orthalicus reses nesodryas Pilsbry, have been confused with the foreign snail Achatina fulica (Bowdich). However, they can be differentiated from Achatina fulica because they have a greyish cast (never reddish) to the stripes, underlying spiral bands, and a columella continuous with the aperture, not truncate. Orthalicus reses reses is endemic to Stock Island, Monroe County, where it is found on a variety of native and exotic trees. Figure 9. The Stock Island treesnail, Orthalicus reses reses (Say, 1830). Photograph by Bill Frank, www.jaxshells.org. 7'. Apex and parietal callus dark chestnut-brown . . . . . Florida Keys treesnail, Orthalicus reses nesodryas Pilsbry, 1946. This subspecies is endemic to the Florida Keys, from Lower Matecumbe Key to Key West, and can be found on a variety of host trees. Figure 10. The Florida Keys treesnail, Orthalicus reses nesodryas Pilsbry, 1946. Photograph by Bill Frank, www.jaxshells.org. - Breure ASH. 1979. Systematics, phylogeny and zoogeography of Bulimulinae. Zoologische Verhandelingen, Leiden, No. 168. 215 pp. - Davidson T. 1965. Tree snails, gems of the Everglades. National Geographic 127: 372-387. - Deisler JE. 1982. Florida tree snail. In Prichard PCH. (ed.), Rare and Endangered Biota of Florida: Invertebrates 6: 15-18. - Frank B, Lee H. (June 2010). Bulimulus sp. aff. guadalupensis (Bruguière, 1789) West Indian Bulimulus. http://www.jaxshells.org/gallery5.htm (8 December 2014). - Jones AL. 1979. Descriptions of six new forms of Florida tree snails. Nautilus 94: 153-159. - Pilsbry HA. 1946. Land Mollusca of North America. Academy of Natural Sciences Philadelphia Monographs 3: 1-520. - Solem A. 1969. Basic distribution of non-marine molluscs. Symposium on Mollusca, Proceedings of the Cochin 1968 Marine Biology Association India. Symposium Series 3: 231-247. - Thompson FG. 1976. The occurrence in Florida of the West Indian land snail, Bulimulus guadaloupensis. Nautilus 90: 10.
<urn:uuid:f19a147b-4d98-4702-9db3-0a2a73bf1177>
CC-MAIN-2016-26
http://entomology.ifas.ufl.edu/creatures/misc/gastro/tree_snails.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.887162
2,356
3.65625
4
Found over sandy areas of lagoon and seaward reefs (Ref. 9710). Juveniles are common in seagrass beds (Ref. 9710). Often in schools (Ref. 26938). Forms schools with the smallmouth grunt (Haemulon chrysargyreum), an association regarded as social protective mimicry (Ref. 52492). Feeds on benthic invertebrates (Ref. 7313). Feeds mostly on benthic organisms (Ref. 33, 26338). Microinvertivore (Ref. 33499, 057616).
<urn:uuid:1aa11c11-423a-4acb-891c-539f2b5d0590>
CC-MAIN-2016-26
http://eol.org/data_objects/20914390
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.785648
121
2.625
3
Photographer: Curtis McQueen Summary Authors: Curtis McQueen; Jim Foster The photo at top shows odd, regularly spaced splotches on the bark of all the ash logs in our woodpile. These appeared in April, a week or two after the white ash wood was cut, split and stacked. At first, we worried that they were the telltale signs of a pest known as the powderpost beetle, but how could we be sure? By placing a piece of the firewood in a ziplock bag for several days, I was eventually able to capture several emerging beetles for a definite identification. I went, bag in hand, to our local beetle expert Dr. Greg Setliff at Kutztown University. He took one look and immediately recognized our soil makers as the humble eastern ash bark beetle. We looked at my captured beetles under a microscope and they were identical to specimens in Dr. Setliff's collection, all with a distinctive pattern on their backs. These beetles are small, about 1/16 in (2 to 2.5 mm) long. The splotches on our logs are called frass - chewed bits of wood surrounding the beetles' exit holes. The frass will eventually fall to the ground and become a small but welcome organic addition to the soil mix here at our house. I checked the hole size under one of the frass piles (inset) and found it to be about 1/32 in (1 mm) in diameter. The frass is very powdery and light -- it just blows away with a puff of air. Photo taken in April 2013 from Kempton, Pennsylvania. Photo details: Top - Camera Maker: FUJIFILM; Camera Model: FinePix S1500; Focal Length: 5.9mm; Aperture: f/2.8; Exposure Time: 0.022 s (1/45); ISO equiv: 400; Software: Digital Camera FinePix S1500 Ver1.03. Bottom - same except: Exposure Time: 0.040 s (1/25); ISO equiv: 800.
<urn:uuid:e7d35d93-3e73-4391-9595-54615b80c82a>
CC-MAIN-2016-26
http://epod.usra.edu/blog/2013/06/bark-beetle-infestation.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948348
436
2.53125
3
Guerin, Andrew James Marine Communities of North Sea Offshore Platforms, and the Use of Stable Isotopes to Explore Artificial Reef Food Webs. University of Southampton, School of Ocean and Earth Science. Stable isotope methods offer a powerful means of investigating trophic interactions, allowing assessment of the relative importance of multiple nutrient sources to biological assemblages, as well as estimation of the trophic positions of consumers. Differences in the isotope ratios of consumers between habitats can thus indicate differences in the structures of food webs, or the contributions of different food sources to those food webs. Isotope methods were used to compare the food web of an artificial reef located off the south coast of England with that of a nearby natural reef system, revealing a similarly complex food web, with similar trophic structure, and similar inputs from the available food sources. Isotope methods should be incorporated into more artificial reef studies, where they have been seldom applied. Offshore oil and gas platforms in the North Sea are artificial reefs, hosting substantial assemblages of sessile invertebrates and other associated fauna, and attracting large numbers of fish and motile invertebrates. Structural survey footage provided by the oil and gas industry allowed the investigation of the marine life associated with several of these structures, of varied ages and in various locations in the North Sea. At least thirty-six taxa of motile invertebrates and fish were observed in association with the structures, most of which were present on all platforms surveyed. While most reef-associated fish were observed around the base of the larger platforms, many thousands of fish were also observed in the water column around these structures at other depths. A small number of sessile taxa dominated the fouling assemblages, in places achieving total coverage of the available surfaces. Fouling composition changed with depth, but this pattern was not identical on all platforms. Platform age and location both affected the fouling assemblages present, but these two factors did not fully explain all the variation.
<urn:uuid:01d9cb8d-6d50-4fdf-bd1c-85aab1fd720e>
CC-MAIN-2016-26
http://eprints.soton.ac.uk/168947/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93397
461
2.703125
3
Craig, O., Mulville, J., Parker Pearson, M., Sokol, R., Gelsthorpe, K., Stacey, R. and Collins, M.J. (2000) Detecting milk proteins in ancient pots. Nature, 408 (6810). p. 312. ISSN 1476-4679. Full text not available from this repository.
[First paragraph] Deciding whether to farm cattle for milk or beef was just as complex in the past as it is today. Compared with meat production, dairying is a high-input, high-output, high-risk operation indicative of an intensive, sophisticated economy, but this practice is notoriously difficult to demonstrate in the archaeological record. Here we provide evidence for the presence of milk proteins preserved in prehistoric vessels, which to our knowledge have not been detected before. This finding resolves the controversy that has surrounded dairying on the Scottish Atlantic coast during the Iron Age and indicates that farming by the early inhabitants of this harsh, marginal environment was surprisingly well developed.
|Copyright, Publisher and Additional Information:||© 2000 Macmillan Magazines Ltd|
|Institution:||The University of Sheffield, The University of York|
|Academic Units:||The University of Sheffield > Faculty of Arts and Humanities (Sheffield) > Department of Archaeology (Sheffield); The University of York > Archaeology (York)|
|Depositing User:||Matthew J. Collins|
|Date Deposited:||01 Dec 2005|
|Last Modified:||06 Jul 2009 10:24|
<urn:uuid:47cc1454-6d18-493a-bdcd-41798644f0f7>
CC-MAIN-2016-26
http://eprints.whiterose.ac.uk/802/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.878248
324
2.953125
3
Why Introduce a New Mapping Tool? The Food Access Research Atlas is a mapping tool that allows users to investigate multiple indicators of food access, expanding upon what was previously known as the Food Desert Locator. A major feature of the new Atlas is that it allows users to view several additional measures of food access, as well as other important indicators of supermarket access. The Atlas also provides updated census-tract estimates of areas with food access limitations using more recent data and improved methods, and offers contextual information for all census tracts in the United States. In the new tool, updated estimates of food-desert census tracts—low-income census tracts where a substantial number or share of people are far from supermarkets—can be viewed and mapped. New additional measures of low-income and low-access census tracts are also estimated and mapped. Two of these new measures use alternative distance markers for assessing how far residents of the census tract are from the nearest supermarket. A third measure directly considers household vehicle availability, since access to a vehicle is an important factor for food access. In addition to these measures of low-income and low-access census tracts, contextual information, such as the share of people living in group quarters in the census tract and whether the tract is a low-income tract, can also be mapped. Each of these indicators is available for all census tracts in the United States, including tracts in Alaska and Hawaii.
<urn:uuid:3c33f25b-c9a7-4aea-b2bc-4e73a0b8b8c6>
CC-MAIN-2016-26
http://ers.usda.gov/data-products/food-access-research-atlas/why-introduce-a-new-mapping-tool.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954432
287
3.078125
3
One of the most prolific and most famous composers of his time, Georg Philipp Telemann was a student at the University of Leipzig. Telemann established a highly successful University Collegium Musicum in Leipzig. In 1721, Telemann was appointed Cantor of the Johanneum in Hamburg and director of music of the major city churches. Telemann lived and worked in Hamburg for the rest of his life and was succeeded by the son of J.S. Bach and Telemann's godson, Carl Philipp Emanuel Bach. In his long career Telemann may be considered a link between the late Baroque and early Classical periods.
<urn:uuid:ba641585-c80c-45c1-8983-204582dc6efd>
CC-MAIN-2016-26
http://everynote.com/flute.choose/0/270/_/_.note
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950858
140
2.8125
3
Landscape Plantings for Energy Savings David H. Trinklein Horticulture State Specialist Division of Plant Sciences A typical plan for windbreaks. The high cost of home heating and air conditioning often prompts homeowners to explore ways to reduce home energy use. Outdoor landscape plants, which help control erosion and are pleasing in themselves, can play a large part in controlling energy use indoors. Therefore, in addition to the beauty of landscape plantings, it is important to consider the entire landscape plan in relation to energy conservation in the home. Windbreaks reduce air movement around the home and thereby slow heat loss from the walls. The most effective windbreaks can reduce wind velocity as much as 50 percent. Windbreaks can also deflect wind movement. The use of windbreaks for winter climate control around the home can reduce winter fuel use by 10 to 25 percent. The effectiveness of a windbreak is determined by the number of rows of plants, type of plants, height of plants, prevailing wind speeds and proper maintenance. In Missouri, prevailing winter winds are from the north and northwest, so plant protective windbreaks to the north and northwest of the home (Figure 1). Most effective windbreaks are planted in U or L shapes. Where space allows, a windbreak should be planted to extend about 50 feet beyond each corner of the area to be protected. The most effective area of a windbreak is at a distance of four to six times the height of the trees, depending on wind speeds. However, even at a distance as much as 20 times the height of the trees, a windbreak may provide some slight benefit. A 20-foot-tall evergreen creates a windbreak on the northern exposure. Snow will drift 20 to 60 feet to the south or southeast. Planting a windbreak When planning a windbreak, remember that trees can cause snow to drift in an area that extends from their base a distance one to three times their height. If at all possible, plant the windbreak so it does not cause snowdrifts on roads, driveways and walks. For example, a 20-foot-tall windbreak of evergreen trees would provide the greatest wind reduction 80 to 120 feet to the south or southeast. Snow accumulation would be greatest 20 to 60 feet away in the same direction (Figure 2). Although a single row of trees provides some benefit, several rows are much more effective. If space allows for only one row of trees, pines are the most satisfactory for Missouri's climate. However, pines often thin out near the ground as they mature, so they may need to be combined with a row of spreading evergreen shrubs such as yews or junipers. Space evergreen trees in the windbreak 6 to 8 feet apart. Stagger them if planting more than one row. When several rows are used, space rows 12 to 20 feet apart, depending on the mature size of the plants. Points to remember in planning a windbreak - Windbreaks are most effective when plants branch to ground level. - The wider the planting, the more effective the windbreak. - When planting more than one row, stagger the plants. - When using only evergreen plants, two or three rows are adequate. When using deciduous materials, four to five rows are necessary. A mixture of both types is most effective. - Where space is adequate, plant a row of fast-growing species (such as poplar), but plan to remove them as soon as more desirable species have grown tall enough. - When planting several rows of plants, the heights of the rows should vary to give an uneven upper edge. - Windbreaks should allow some wind penetration.
Impenetrable windbreaks create a partial vacuum on the protected side, reducing their effectiveness. More details on developing a windbreak planting are available in MU Extension publication G5900, Planning Tree Windbreaks in Missouri. Diverting air movement Although windbreaks function primarily by reducing the impact of the wind, they also shift air movement. The ability of plants to divert air streams provides the greatest benefit during the summer months. Increasing the air flow in play, patio and other living areas improves summer comfort and reduces the need for air conditioning. Garden structures such as screens and fences may influence air movement, but plants add a cooling factor as water evaporates from moist leaves. The prevailing summer winds in Missouri are from the south and southwest, so place plants to be used for channeling breezes around the house in that direction away from the home. Even the windbreak on the north-northwest side of the home channels breezes, but it may not channel them into the home or outdoor living areas in the summer. The house itself blocks air movement and channels breezes around corners if there are no barriers. These increased air movements at corners may be used if they are directed into patios and play areas. Position plants or screens to create a funneling effect, which gathers and concentrates existing currents. This plan is more effective with light breezes because strong winds move up and over a barrier and create a different type of air movement. Deciduous trees on the southwest side of a house reduce indoor air temperature in summer and increase indoor air temperature in winter. Sun and shade The use of plants to control wind speed and movement is important for climate control and energy efficiency, but plant location in relation to the sun is also important. Choose and place plants so they do not form a barrier when direct rays of the sun are needed for warmth in winter. Rather, place plants so they provide shade for the house from the intense heat of the sun in summer (Figure 3). Deciduous trees with heavy shade cover in summer and open branching and complete loss of leaves in winter are the best choices to achieve this result. Planting for summer shade Because of the movement of the sun across the sky in summer, plants south and southwest of the home are most functional for providing summer shade. The best trees for summer shade, which also produce minimum shade in the winter, are those with spreading branches and few fine twigs. Examples of such trees are ginkgo, Kentucky coffee tree, white ash and green ash. Trees selected for this use should mature large enough to throw shade on the roof of the house on a midsummer afternoon. To shade the roof of a one-story home about 20 feet high, place the tree 15 to 20 feet from the home. Large trees should not be placed closer than 20 feet whereas medium-sized trees may be placed as close as 15 feet away. Small, flowering trees such as redbud may be placed closer than 15 feet to provide some shade. Although they may be used for wall or window shade, they do not grow large enough to provide adequate roof shade. Trees used for shade may also serve other purposes in the landscape. Planting vines for shade In some locations where space is too limited to plant trees for shade, vines may be used. Deciduous vines are most effective on southern and western walls. For masonry walls, which allow clinging species of vines to be used, Virginia creeper or Boston ivy are effective. 
These vines are deciduous, shading the wall in summer but dropping leaves in early fall to allow warming of the walls in winter. Clinging vines are not suitable for wood walls because they tend to hold in moisture and speed wood decomposition. Where clinging vines cannot be used, twining vines may provide needed shade. They may be trained onto trellises placed near, but not against, the walls. Wisteria or bittersweet may be used in such locations. Even nontwining "climbers" such as climbing roses may serve this purpose when trained onto trellises. In some locations, overhead structures such as arbors may serve a dual purpose by providing shade to patios and, at the same time, casting shade on walls and windows to keep the indoors cooler. Wisteria is a favorite vine for such overhead structures, although any vigorous, fast-growing vine is suitable. Evergreens adjacent to the northwest sides of a house reduce wind speed and create dead air space for insulation. Dead air space In addition to creating a windbreak, plants can be used to create dead air space along walls and thus provide extra insulation. Foundation planting of evergreens cuts out air movement close to the house and creates a layer of still air behind it. This still or slow-moving air forms an insulating layer that reduces the greater heat loss caused by moving air. Vines, particularly the evergreen types such as English ivy, can also reduce heat loss in the same way. This technique is particularly useful for north walls where the sun never shines (Figure 4). Evergreen plants are most effective for foundation plantings because their insulating effect is desirable in both summer and winter. The most effective use of plants for this purpose comes from a continuous line that extends along the wall and around the corners. To achieve an attractive effect, use different kinds of plants with a variety of leaf textures, heights, forms and shades of green. Yew, juniper, mugo pine and holly are a few evergreen plants suitable for these plantings, depending on climates and suitable locations. Summer reduction of air temperature by evaporative cooling as water passes through plant leaves is also important for comfort and for reducing air conditioning needs. Large green areas or a grouping of trees produce the cooling effect of a forest. The lawn or large areas of other groundcovers provide much cooler surfaces than comparable areas of bare ground or paved surface. To keep the air cooler, include a minimum of paved surfaces in the landscape, or locate them where they are shaded during the hottest parts of the summer day. Nonpaved surfaces should have green cover over them, either lawn, groundcover or larger plants. Plants not only control erosion and beautify the landscape, but they also make homes more comfortable and save energy. Ray R. Rothenberger
<urn:uuid:ed7626d3-1d4d-48df-a753-8160a0e9fe41>
CC-MAIN-2016-26
http://extension.missouri.edu/publications/displaypub.aspx?p=g6910
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.927051
2,054
3.390625
3
First Woman to Fly the English Channel, 1912 Before Amelia Earhart, it was Harriet Quimby who captured the public's imagination as America's premier aviatrix. Born in 1875, Harriet arrived in New York City in 1903, landing a job as a reporter with Leslie's Illustrated Weekly. She was a remarkable woman endowed with natural beauty, flamboyance, a penchant for adventure and a flair for self-promotion. Soon she was traveling the country alone searching for stories, learning to drive an automobile, and displaying an independence not expected of a woman at the time. Harriet in her plum-colored flying suit Her role as a theater critic for Leslie's brought her into contact with New York's "Jet Set" of the day. Here she met a flying instructor and asked him if he would teach her to fly. She began her lessons in May 1911 at a flying school on Long Island, becoming the first woman to begin flight training. The airplanes of the day were rickety affairs - collections of wood and canvas held together with piano wire. There was no science of flight, no rules on design or construction, the books were still being written. Experimentation was done in the air and lessons learned through fatal accidents. Harriet proved she had a natural flying ability, excelled in her training and earned her pilot's license in August - the first American woman to achieve this. She immediately joined the exhibition circuit, flying at meets in this country and Mexico. Dressed in her plum-colored, satin flying suit adorned with elegant jewelry, she became an immediate sensation. Her articles in Leslie's sold thousands of newspapers and spread her fame. In 1912 she traveled to England convincing the London Daily Mirror to fund her attempt to fly the English Channel in exchange for exclusive coverage. It had been only 3 years since Frenchman Louis Bleriot successfully flew across the Channel and the attempt had never been made by a woman. Harriet made her flight in the early morning hours of April 16, reversing her French predecessor's route by taking off from Dover. She landed her Bleriot monoplane in triumph on the French shore near Calais, assuring her fame and assuming the title "America's First Lady of the Air." It was not to last. Ten weeks later Harriet's life would end in a tragic accident over Boston Harbor. On July 1st, during an exhibition, Harriet flew the event's manager with her to Boston Light at the harbor's entrance. On the return leg, the crowd watched in horror as the plane suddenly lurched and first the manager and then Harriet were flung to their deaths in the harbor 500 feet below. Harriet's aviation career had lasted only eleven months - her candle had burned briefly but brightly. Harriet awoke at 3:30 AM the morning of April 16, 1912 and proceeded to the flying field at Dover with her entourage. Conditions were ideal - a generally clear sky and, most importantly, no wind. However, Harriet would have to hurry to begin her flight before the winds increased: "The sky seemed clear, but patches of cloud and masses of fog here and there obscured the blue. The French coast was wholly invisible, by reason of moving masses of mist. The wind had not come up yet. The smooth grounds of the aerodrome gave me a chance for a perfect start. I heeded Mr. Hamel's (an assistant) warning about the coldness of the channel flight and had prepared accordingly. 
Under my flying suit of wool-back satin I wore two pairs of silk combinations, over it a long woolen coat, over this an American raincoat, and around and across my shoulders a long wide stole of sealskin. Even this did not satisfy my solicitous friends. At the last minute they handed me up a large hot-water bag, which Mr. Hamel insisted on tying to my waist like an enormous locket. It was five-thirty A.M. when my machine got off the ground. The preliminaries were brief. Hearty handshakes were quickly given, the motor began to make its twelve hundred revolutions a minute, and I put up my hand to give the signal of release. Then I was off. The noise of the motor drowned the shouts and cheers of friends below. In a moment I was in the air, climbing steadily in a long circle. I was up fifteen hundred feet within thirty seconds. From this high point of vantage my eyes lit at once on Dover Castle. It was half hidden in a fog bank. I felt that trouble was coming, but I made directly for the flagstaff of the castle, as I had promised the waiting Mirror photographers and the moving-picture men I should do. Harriet steps into the cockpit April 16, 1912 In an instant I was beyond the cliffs and over the channel. Far beneath I saw the Mirror's tug, with its stream of black smoke. It was trying to keep ahead of me, but I passed it in a jiffy. Then the quickening fog obscured my view. Calais was out of sight. I could not see ahead of me or at all below. There was only one thing for me to do and that was to keep my eyes fixed on my compass. My hands were covered with long Scotch woolen gloves which gave me good protection from the cold and fog; but the machine was wet and my face was so covered with dampness that I had to push my goggles up on my forehead. I could not see through them. I was traveling at over a mile a minute. The distance straight across from Dover to Calais is only twenty-five miles, and I knew that land must be in sight if I could only get below the fog and see it. So I dropped from an altitude of about two thousand feet until I was half that height. The sunlight struck upon my face and my eyes lit upon the white and sandy shores of France. I felt happy, but could not find Calais. Being unfamiliar with the coast line, I could not locate myself. I determined to reconnoiter and come down to a height of about five hundred feet and traverse the shore. Meanwhile, the wind had risen and the currents were coming in billowy gusts. I flew a short distance inland to locate myself or find a good place on which to alight. It was all tilled land below me, and rather than tear up the farmers' fields I decided to drop down on the hard and sandy beach. I did so at once, making an easy landing. Then I jumped from my machine and was alone upon the shore. But it was only for a few moments. A crowd of fishermen - men, women and children each carrying a pail of sand worms - came rushing from all directions toward me. They were chattering in French, of which I comprehended sufficient to discover that they knew I had crossed the channel. These humble fisherfolk knew what had happened. They were congratulating themselves that the first woman to cross in an aeroplane had landed on their fishing beach." 
Locals carry Harriet up the beach after her landing
References: Holden, Henry M., Her Mentor was an Albatross (1993); Quimby, Harriet, "An American Girl's Daring Exploits," Leslie's Weekly (May 16, 1912), reprinted in: Harris, Sherwood, The First to Fly: Aviation's Pioneer Days (1970); Wohl, Robert, A Passion for Wings (1994).
How To Cite This Article: "First Woman to Fly the English Channel, 1912," EyeWitness to History, www.eyewitnesstohistory.com (2002).
<urn:uuid:ab23aafe-c558-43ec-b408-09ee676d2a20>
CC-MAIN-2016-26
http://eyewitnesstohistory.com/quimby.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977673
1,578
3.078125
3
ANSI Common Lisp 3 Evaluation and Compilation 3.8 Dictionary Evaluation and Compilation
- Arguments and Values: form - a form. results - the values yielded by the evaluation of form.
Evaluates form in the current dynamic environment and the null lexical environment. eval is a user interface to the evaluator. The evaluator expands macro calls as if through the use of macroexpand-1. Constants appearing in code processed by eval are not copied nor coalesced. The code resulting from the execution of eval references objects that are eql to the corresponding objects in the source code.
(setq form '(1+ a) a 999) 999
(eval form) 1000
(eval 'form) (1+ A)
(let ((a '(this would break if eval used local value))) (eval form)) 1000
- See Also: Section 3.1.2 The Evaluation Model
To obtain the current dynamic value of a symbol, use of symbol-value is equivalent (and usually preferable) to use of eval. Note that an eval form involves two levels of evaluation for its argument. First, form is evaluated by the normal argument evaluation mechanism as would occur with any call. The object that results from this normal argument evaluation becomes the value of the form parameter, and is then evaluated as part of the eval form.
(eval (list 'cdr (car '((quote (a . b)) c)))) b
The argument form (list 'cdr (car '((quote (a . b)) c))) is evaluated in the usual way to produce the argument (cdr (quote (a . b))); eval then evaluates its argument, (cdr (quote (a . b))), to produce b. Since a single evaluation already occurs for any argument form in any function form, eval is sometimes said to perform "an extra level of evaluation."
- Allegro CL Implementation Details:
<urn:uuid:0124cfd0-a9c0-4ec5-8b88-68f64b2a3a3d>
CC-MAIN-2016-26
http://franz.com/support/documentation/8.2/ansicl/dictentr/eval.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.758872
421
3.3125
3
If humans were to build a Dyson sphere, we would use all the matter in the solar system, not just a little bit of the Earth. Except the matter in the Sun, of course. Jupiter alone is over 300x the mass of the Earth. Besides astonishing physical manipulation capabilities, building such a thing would require the ability to transmute matter, changing it into other elements, and probably to manipulate matter on the nano scale in massive quantities. No material presently known to exist is strong enough to build into a Dyson sphere.
"No presently known material is strong enough to build"
Please see my post #31. It applies here as well.
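As a rough cross-check of the numbers in this thread, the short Python sketch below compares Jupiter's mass to Earth's and spreads the combined mass of the planets over a shell at Earth's orbital distance. The masses and the 1 AU figure are standard reference values I have supplied, not numbers taken from the posts, so treat this as a back-of-envelope illustration only.

```python
import math

# Standard reference values (kg, m); not taken from the thread
M_EARTH = 5.972e24
M_JUPITER = 1.898e27
M_ALL_PLANETS = 2.67e27   # approximate total for the eight planets
AU = 1.496e11             # Earth-Sun distance in metres

# "Jupiter alone is over 300x the mass of the Earth"
print(f"Jupiter / Earth mass ratio: {M_JUPITER / M_EARTH:.0f}")

# Spread every planet's mass over a full shell of radius 1 AU
shell_area = 4 * math.pi * AU ** 2
print(f"Shell area at 1 AU: {shell_area:.2e} m^2")
print(f"Planetary mass available per m^2: {M_ALL_PLANETS / shell_area:.0f} kg")
```

Run as-is, this gives a ratio of roughly 318 and on the order of ten tonnes of planetary material per square metre of shell, which is why the discussion keeps coming back to material strength and transmutation.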
<urn:uuid:fa2a9512-2a30-4582-826a-192ad0de0501>
CC-MAIN-2016-26
http://freerepublic.com/focus/f-chat/2940469/replies?c=28
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961291
132
2.6875
3
Middle Park Fossils from the USGS-Denver Collections by Kevin C. McKinney, Robert O'Donnell, and William A. Cobban The USGS-Denver has a particularly large collection of fossils from the region around Middle Park, Colorado. In conjunction with geological mapping projects between 1960 and 1975, a diverse array of mammal fossils were collected from the Middle Park Troublesome Formation by Glen Izett, Ed Lewis, and Bob O'Donnell. During the same period Glen Izett and Bill Cobban collected Cretaceous ammonites and inoceramids. The fossils were instrumental in providing stratigraphic age control for the geologic mapping projects. Finding the Camel Fossil was the Fun Part Collecting the camel fossils pictured on the Middle Park vertebrates page required delivering supplies and planning the path for the plaster jackets back up the hill. |The fragile fossil was secured in plaster jackets. Then the last bits of matrix were removed.| |Plaster jackets were slowly winched up the hill to the field truck.|
<urn:uuid:a236bccd-e235-4f15-9efb-75bf1324f435>
CC-MAIN-2016-26
http://geology.cr.usgs.gov/crc/fossils/middlepark.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92894
213
2.5625
3
This text is not intended as a tutorial to help in learning the Korean alphabet. Instead, it is written for web designers and other people who need a superficial knowledge of how to encode Korean characters in an HTML document. Basic knowledge of Unicode character representation and HTML character references is necessary to understand this text. If you don't know about Unicode, Character Sets, Encodings and I18n, please read a tutorial, e.g., Character Set Issues by A.J. Flavell. The Hangul are composed of letters (jamo, 자모) in a rather systematic way. The Jamo represent sounds similar to the way Latin letters represent sounds. Although there are 11172 different Hangul, their individual appearances need not be memorized; rather, one has to learn the 68 different Jamo shapes and the rules governing the construction of Hangul from Jamo. Moreover, even the Jamo shapes are not arbitrary: For example, the Jamo ㅆ (SS) looks like a duplicated ㅅ (S), and the Jamo ㄵ (NJ) is a juxtaposition of ㄴ (N) and ㅈ (J). Some Jamo shapes are obviously mnemonic, e.g., ㅁ (M) which symbolizes a closed mouth articulating the sound /m/. It is now fairly established that this mnemonic character applies to all Jamo consonants even if they look rather arbitrary: The key is the shape and position of the tongue when producing the consonant sound. There are 19 different lead consonants, including the mute consonant. The following table gives these consonants in their canonical order, and their Unicode values. Consonant number 12 is the mute consonant. The vowels number 21 in Korean. Note that some of these vowels would be classified as diphthongs in other languages; also, some vowels contain a Y as part of the vowel. The total number of tail consonants is 27; some of them are very rarely used in modern Korean. Most of the tail consonants can also appear as leads; the Jamo for these consonant pairs look very similar. Tail consonant 21 ㅇ (NG) corresponds to the mute lead consonant ㅇ (12). This correspondence is purely graphical and has no deeper meaning; some fonts will distinguish the two characters by a small vertical stroke attached to the NG letter, while others will render them identically as a (squashed) circle. Actually, Unicode has another range of Jamo characters called Hangul Compatibility Jamo, starting at U+3100. These represent the same letters, but they have no conjoining behaviour as described in the next section. The compatibility Jamo are rather unlikely to appear in a real Korean document, but they can be used if isolated Jamo must be shown in a text, for example an explanation of the writing system (like the one you are currently reading). In this document, I use compatibility Jamo almost everywhere, as they tend to render more cleanly. In the tables on the right side, each Jamo is given twice: First as conjoining Jamo and then as compatibility Jamo, while the hexadecimal codepoint refers to the former. The differences (if any) can be demonstrated here: Jamo GG ᄁ (head) and ᆩ (tail) and Compatibility Jamo GG ㄲ. What you will see depends on your operating system, your browser, your default font and even the font size. While the Compatibility Jamo will probably look all right, the isolated conjoining Jamo may appear identical, smaller (and raised or lowered), overlapping their neighbours, or even empty. Therefore, a Korean text can be seen as a sequence of Hangul, each of which represents one spoken syllable. 
In this view, Korean script would be seen as a syllabary comparable to East African Ge'ez script, and to a lesser degree, Japanese (kana) scripts. On the other hand, one could understand Korean writing without reference to Hangul at all; according to this perspective, Korean writing is as alphabetical as the Latin script, but uses complicated typographic rules to determine the placement of any Jamo relative to its predecessor and successor. The latter view is also reminiscent of the Indic scripts of the Brahmi family, where an arbitrary number of consonant signs plus a vowel are graphically combined into a syllable glyph; the combination rules for Indic scripts are, however, much more involved. It has been argued that the Indic model influenced the construction of the Hangul script via the Tibetan Phagspa script. Phagspa was a short-lived script and is now extinct; it is not the predecessor of the modern Tibetan script. Canonical equivalence even extends to mixed cases of Hangul HWEO 훠 plus tail Jamo LH ㅀ, see here 훯 for a live example (support for this construction is much worse than for the previous). However, there is no canonical nor compatibility equivalence that would allow you to decompose a complex Jamo like LH into its constituents (L and H); therefore, you cannot represent the HWEOLH syllable by something like Jamos H+WEO+L+H or by Hangul HWEOL plus tail Jamo H. The common way of coding Korean text is to use the precomposed Hangul syllables that do not explicitly reference the underlying Jamo characters. Isolated Jamo are rarely found in Korean texts. The Unicode Standard assigns an individual code point to each Hangul. To calculate the code point of a Hangul from its Jamo components, the following formula may be used: Code point of Hangul = tail + (vowel−1)*28 + (lead−1)*588 + 44032 In this formula, lead, vowel and tail refer to the small integer numbers given in the above tables (if there is no tail consonant, use the value 0). The Hangul syllabary occupies the Unicode range from AC00 (decimal 44032) to D7A3 (decimal 55203). In UTF-8, each Hangul needs three bytes (the same is also true for the Jamo, which is another reason why they are almost never used for encoding Korean text). In the other direction, the phonetic value of a Hangul can be calculated from its code point. It is convenient to use the modulo function mod(a,b), which yields the remainder of the quotient a/b, and the integer function int(a) which yields the integer part of a. tail = mod (Hangul codepoint − 44032, 28) vowel = 1 + mod (Hangul codepoint − 44032 − tail, 588) / 28 lead = 1 + int [ (Hangul codepoint − 44032)/588 ] To illustrate the formulae, let us consider the writing of the words jamo and hangul in Hangul. The necessary Hangul are called JA, MO, HAN and GEUL in Unicode. |lead consonant||J ㅈ (13)||M ㅁ (7)||H ㅎ (19)||G ㄱ (1)| |vowel||A ㅏ (1)||O ㅗ (9)||A ㅏ (1)||EU ㅡ (19)| |tail consonant||– (0)||– (0)||N ㄴ (4)||L ㄹ (8)| |Hangul code point (dec)||51088||47784||54620||44544| |Hangul code point (hex)||C790||BAA8||D55C||AE00| As an inverse problem, we now analyse the two Korean words 서울 and 평양:
These are usually rendered in Latin script as Seoul and Pyeongyang, although other romanizations are possible (e.g., Sŏul and P’yŏngyang or Pyongyang). Note that Unicode contains also obsolete or archaic Jamo that are absent from standard writing. They might still be used in reproducing historical texts, writing Korean dialects or transcribing Chinese words. Unicode does not offer precomposed hangul with these rare letters; instead, syllables containing them must be coded by jamo letters (if only the tail of a syllable is archaic, then the mixed representation with an open-syllable hangul followed by the archaic tail is also possible). An example is the archaic Z ㅿ appearing in the hangul ZIZ ᅀᅵᇫ or GOZ 고ᇫ or 고ᇫ (unlikely to render correctly) The formulae in the previous section should enable you to analyze a Hangul encountered in some dark corner, or to construct a Hangul out of its Unicode name. To illustrate the formulae, and to make things easier to use, I offer a Hangul Construction Form that allows you to create Hangul from their components, or to decompose them. You might also construct the Hangul for fun, or to study how the visual appearance depends on the size and shape of the Jamo constituents. You can select constituent Jamo (by Unicode name) from the dropdown menues, or enter data into the text fields. The text fields will accept either valid Unicode names of the right kind (e.g., "N" in the first, "YE" in the second, "LB" in the third or "BBWEOBS" in the last field) or true Korean characters of the right kind (combining or compatibility Jamo in the first three fields, Hangul in the last). In the course of the calculation, text field contents will be normalized into true Korean characters (actually, Compatibility Jamo in the first three fields), irrespective of the type of input. Input parsing is tolerant on case and blanks, but make sure you delete previous field contents. Clicking the codepoint will display the current form data permanently in a table, which is useful if you wish to study the Hangul shapes in a series with varying Jamo components. Jamo shown are Compatibility Jamo, but Jamo codepoints refer to true combining Jamo. romanized), the question arises whether the spoken or the written language should be followed in devising a romanization scheme. Both appoaches have their merits, and both are actually used. The Unicode names of Jamo and Hangul closely follow the Revised Romanization of Korean, which has official status in Korea. The revised system basically maps each Jamo to one letter (or a polygraph) of the Latin alphabet, thus creating a faithful representation of written Korean in Latin script. It does not, however, represent the actual pronunciation very well. Outside of Korea, the older McCune-Reischauer System continues to be popular. It takes into account certain assimilation phenomena that occur on syllable boundaries, and thus comes closer to the actual pronunciation of Korean. On the other hand, McCune-Reischauer romanizations cannot be constructed trivially from sequences of Jamo. The remainder of this section will describe the procedure briefly. The vowels are transliterated in a rather straightforward way. As special characters, u and o with the breve accent (ŭ, ŏ) are used to denote short reduced vowels. The diacritics are often omitted when publishing on the web. The romanization of consonants is significantly more involved. The complication arises from the fact the tail of any syllable may be assimilated to the lead of the following syllable. 
Therefore, there are no separate pronunciations of the tail and the lead of consecutive syllables, but both of them are to be pronounced as one single unit. The McCune-Reischauer romanization acknowledges this fact by also romanizing them as a unit. The following table gives the transliteration of tail/lead combinations involving the most common Jamo. The apostrophe is used to disambiguate N’G (sequence of Jamo N + Jamo G) from NG (Jamo NG) and to mark aspirated plosives. This sign is often omitted from documents published on the web. To illustrate the use of that table, we will romanize the National Motto of South Korea 널리 인간을 이롭게 하라 Bring benefit to all people using the McCune-Reischauer System. Note that in the combination GAN-EUL (간을), the second hangul starts with an empty lead, thus the transcription value for the N must be taken from the first column of the table. Likewise, in I-ROB (이롭), the first hangul has no tail which means that the penultimate row applies. This is admittedly complicated, but not unreasonably so. Note: Some browsers have problems displaying this large table. Therefore, its display is now switched off. You may enable it at your own risk (this can take minutes for some versions of the Chrome browser, though Gecko and Safari need only a few seconds).
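The arithmetic in the composition and decomposition formulas earlier in this text is easy to check mechanically. The short Python sketch below is not part of the original page or of its Hangul Construction Form; the function names and the way the indices are passed around are my own choices, but the constants (44032, 588, 28) and the 1-based lead/vowel and 0-based tail numbering follow the tables given above.

```python
# Minimal sketch of the Hangul composition/decomposition arithmetic described above.
# Leads are numbered 1-19, vowels 1-21, tails 0-27 (0 = no tail), as in the tables.

HANGUL_BASE = 0xAC00   # U+AC00, decimal 44032
VOWEL_COUNT = 21
TAIL_COUNT = 28        # 27 tail consonants plus "no tail"

def compose(lead: int, vowel: int, tail: int = 0) -> str:
    """Build a precomposed Hangul syllable from jamo indices."""
    code = HANGUL_BASE + (lead - 1) * VOWEL_COUNT * TAIL_COUNT \
                       + (vowel - 1) * TAIL_COUNT + tail
    return chr(code)

def decompose(syllable: str) -> tuple:
    """Return (lead, vowel, tail) indices for a precomposed Hangul syllable."""
    offset = ord(syllable) - HANGUL_BASE
    if not 0 <= offset < 19 * VOWEL_COUNT * TAIL_COUNT:
        raise ValueError("not a precomposed Hangul syllable")
    tail = offset % TAIL_COUNT
    vowel = (offset // TAIL_COUNT) % VOWEL_COUNT + 1
    lead = offset // (VOWEL_COUNT * TAIL_COUNT) + 1
    return lead, vowel, tail

# HAN = lead H (19), vowel A (1), tail N (4) -> U+D55C; GEUL = G (1), EU (19), L (8) -> U+AE00
print(compose(19, 1, 4), compose(1, 19, 8))
# Decompose the syllables of 서울 and 평양
for ch in "서울평양":
    print(ch, decompose(ch))
```

Running this reproduces the worked examples: the first print yields 한 and 글, and the decomposition loop gives the same (lead, vowel, tail) triples as the tables for 서울 and 평양, e.g., (10, 5, 0) for 서 and (12, 14, 8) for 울.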
<urn:uuid:9c308c03-1d5a-4988-ae4d-6a0babdafa28>
CC-MAIN-2016-26
http://gernot-katzers-spice-pages.com/var/korean_hangul_unicode.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.802831
3,057
3.875
4
Human Ecology - Basic Concepts for Sustainable Development Environmental success stories from around the world with their lessons on how to turn from decline to restoration and sustainability. Author: Gerald G. Marten Publisher: Earthscan Publications Publication Date: November 2001, 256 pp. Paperback ISBN: 1853837148 Hardback SBN: 185383713X Chapter 11 - Sustainable Human - Ecosystem Interaction - Human social institutions and sustainable use of common property resources - Coexistence of urban ecosystems with nature - Resilience and sustainable development - Adaptive development - Things to think about How can modern society embark on a course of ecologically sustainable development? Firstly, and most importantly, do not damage ecosystems. - Do not damage ecosystems to such an extent that they lose their ability to provide essential services. - Watch carefully for environmental or social side effects when using new technologies. - Do not overexploit fisheries, forests, watersheds, farm soils or other parts of ecosystems that provide essential renewable resources. Increase the use of renewable natural resources gradually, monitoring for damage to the resource. - Develop social institutions to protect common property resources from tragedy of the commons. - Follow the precautionary principle when using natural resources, disposing of wastes or interacting with ecosystems in any way. Secondly, do things nature’s way so that nature does as much of the work as possible. - Take advantage of nature’s self-organizing abilities, thereby reducing the human inputs needed to organize ecosystems. - Develop technologies that have low inputs because they are designed to let nature do the work. - Take advantage of natural positive and negative feedback loops instead of struggling against them. - Take advantage of natural cycles that use waste from one part of the ecosystem as a resource for another part of the ecosystem. - Organize agricultural and urban ecosystems to mimic natural strategies. For example, organize agriculture ecosystems as polycultures that resemble natural ecosystems in the same climatic region. Recycle manufactured goods in a ‘technical cycle’ that keeps the wastes from manufactured goods separate from biological cycles in the ecosystem. How can sustainable development be achieved in practice? This chapter starts with an important example - social institutions to prevent tragedy of the commons. It then examines the issue of coexistence of urban ecosystems with nature. The chapter concludes by exploring two essential and interrelated aspects of sustainable development: - Resilience - the ability of social systems and ecosystems to continue functioning despite severe and unexpected stresses. - Adaptive development - the ability of social systems to cope with change. Resilience and adaptive development are important because ecologically sustainable development is not simply a matter of harmonious equilibrium with the environment. Preventing damage to ecosystems is absolutely essential for sustainable development, but it is not enough. Human society is constantly changing, and so is the environment. Sustainable development requires a capacity to deal with change. Resilience and adaptive development are the key to attaining that capacity. Human Social Institutions and Sustainable Use of Common Property Resources How can we prevent tragedy of the commons? 
Where existing social institutions encourage tragedy of the commons by making the overexploitation of common property resources a rational choice for individuals, we need new institutions that make sustainable use the rational choice. Scientists have compared hundreds of societies around the world to discover what social institutions are associated with the sustainable use of resources, such as forests, fisheries, irrigation water and communal pastures. They have discovered that some societies are highly successful at preventing tragedy of the commons. While details are different in each instance, the successful cases all have the following themes in common. - Clear ownership and boundaries: group ownership of a clearly defined area provides the control that is necessary to prevent overexploitation. This is closed access. Territoriality is a common social institution that people use to define ownership and boundaries. Extended maritime jurisdictions that nations have declared for ownership of marine natural resources within 320 kilometres of their shores are modern examples of closed access for common property resources. - Commitment to the sustainable use of the resource: the owners of a common property resource must really want to use it on a sustainable basis. They must agree that: - individual use is damaging the resource; - cooperative use of the resource will reduce the risks of damage; - the future is important (ie, opportunities for their children and grandchildren are as important as their own short-term gains). It is best if the owners have a shared past, trust each other, expect a shared future and value their reputation in the community. It is easier if ethnic differences or economic status are not sources of conflict for the resource owners. - Agreement about rules for using the resource: everyone should have enough knowledge about the resource in order to understand the consequences of using it in different ways. Good rules require not only a thorough knowledge of the resource itself but also a knowledge of the behaviour of the people who are using it. Good rules are simple, so everyone knows what is expected, and good rules are fair. No one likes to sacrifice for the selfish gain of others. Good rules produce benefits that exceed the costs of cooperation, costs that include organizational overhead, the effort or expense necessary to make a group function. Good rules don’t waste people’s time or other valued resources. - Internal adaptive mechanisms: sooner or later, it is necessary to adapt the rules for using a common property resource because of changes in the social system or the ecosystem, changes that often originate from the ‘outside world’. The mechanisms for adapting rules should be simple and inexpensive. Changes should usually be incremental so that large mistakes are avoided. It is important to monitor carefully what happens after new rules are put into effect, so the group can decide whether to make further changes. Trial-and-error evolution of rules is an example of self-organization in the social system. - Enforcement of rules: people usually follow rules if they think that everyone else is following the rules too. The best way to prevent people from breaking the rules is by internal monitoring - resource users watch each other - supplemented by external monitoring, such as guards. Everyone must know that everyone else will know if he or she break the rules. Severe punishments are not necessary if infractions are likely to be detected. 
Social pressure and the embarrassment of being caught are sufficient deterrents. Punishments should be minimal because punishments are disruptive to a spirit of cooperation. - Conflict resolution: people sometimes have different perceptions about applying rules to particular situations. Conflict resolution should be simple, inexpensive and fair. - Minimum external interference: local autonomy - being able to function independently, without control by others - is important because external authority may impose decisions that are not appropriate for local conditions. One of the most frequent reasons for unsustainable use of common property resources is interference from government authorities or economic forces outside the area. ‘Outsiders’ may not care about local sustainability, and outsiders seldom know enough about the local situation to understand what rules will work under local conditions. An example of successful common-property resource use: coastal fisheries in Turkey Tragedy of the commons is a frequent problem when fishermen do not cooperate to prevent overfishing. Because many traditional fishing villages have territorial jurisdiction over the fishing areas near their village, they have the clear ownership that is essential for sustainable resource use. Once ownership is established, the key is the establishment of good rules to prevent overfishing. Coastal marine fishermen in Turkey provide an example of rules that work because they are adapted to local conditions. The fishery in Turkey has two important characteristics: - Some areas are better for fishing than others. - Some areas are better for fishing during particular times of the year because fish move to different waters during different times of the year. The fishermen have devised the following rules: - They use a map to divide the fishing area into sites that are equal in number to the number of fishermen (see Figure 11.1). They draw lots at the beginning of the fishing season to determine which site each fisherman will use on the first day of the season. - Each fisherman can fish at only his assigned site the first day. He can fish at only the next numbered site the next day, and he must move from one numbered site to another every day after that. Figure 11.1 - A system used to prevent tragedy of the commons in a coastal fishery in Turkey These rules are simple and therefore easily understood by everyone. They are also fair despite the complexities of good sites, poor sites and fish movements during the year. Every fisherman has an opportunity to fish good sites as well as poor ones. The rules are also easy to enforce. If they are broken, it is usually when fishermen fish good sites on days that are not their turn. This is easy to detect because fishermen almost always go to good sites on the days when they are allowed to fish them. As a consequence, legitimate users of good sites are there to make sure that other fishermen do not use the site. An example of successful common property resource use: traditional village forest management in Japan For more than 1000 years, forests in Japan were the main source of essential materials such as water, wood for construction, thatch for roofs, food for domesticated animals, organic fertilizer (decomposing leaves) for farm fields and firewood and charcoal for cooking and heating. The Japanese used their forests intensively, but they were able to prevent tragedy of the commons by managing their forests as a closed-access common property resource. 
The forest around each village belonged to that village. The village controlled who used the forest and how. Although agricultural land such as rice fields was in private ownership, the forest belonged to the village as a whole. Everyone agreed that common lands such as the forest should be managed to serve the long-term needs of the entire village. While every family in a village had a right to use the forest, rules for forest use were decided by a village council with representation from families having decision-making authority by virtue of land ownership, land use rights or taxpaying obligations. Rules were designed to: - limit the quantity of forest products that a family in the village could remove from the forest; - provide equal access for every family in the village, while preventing overexploitation of the forest by the village as a whole; - require as little effort as possible to implement and enforce; - accommodate the roles that each forest product had in the village economy; - fit with details of the local environment; The extended family household was the basic unit of access to the village forest, and each household was assigned specific dates during which it could remove wood or other materials. For most materials, there was no limit on the amount that each household could remove during its scheduled time. In many villages, a number of households were organized into groups called kumi. Each kumi was assigned a different section of the forest for its use. In order to ensure fairness, the assignment was rotated each year so that each kumi could use a different part of the forest. The way the rules worked can be illustrated by a typical procedure for removing animal fodder from the forest. Each household could send only one adult to cut the grass in its part of the forest on the scheduled day. Everyone in the same kumi formed a line to cut the grass in their part of the forest, and they could only start cutting after the temple bell sounded. They left the grass to dry after cutting. About a week later, two people from each household could go to the forest to tie the dried grass into bundles and place the bundles in piles of equal size (one pile for each household in the kumi). The piles were then distributed to all the households in each kumi by lottery. Each village developed its own way of enforcing the rules. Because people were allowed to remove materials from the forest only on specified dates, anyone seen in the forest during other times was obviously breaking the rules. Most villages hired guards (a prestigious job for young men), who patrolled the forest on horseback in groups of two. In some areas all the young men in the village served as guards on a rotational basis. In villages that did not use guards, any member of the village could report seeing someone in the forest at the wrong time. Each village had its own penalties for breaking the rules. The forest guards usually handled occasional violations in a quiet and simple manner. It was accepted practice for guards to demand a small payment of money or sake from the rule-breaker. If a violation was more serious, the guards confiscated the illegal harvest and any equipment or horses that the rule-breaker was using. Rule-breakers had to pay a fine to the village to recover their equipment or horses. The amount of a fine depended upon the seriousness of the offence, the willingness of the rule-breaker to make rapid amends and whether the rule-breaker had a history of violations. 
People sometimes broke the rules because they desperately needed material from the forest at a time during which they were not allowed to remove it. One effective strategy for breaking the rules was to send the family’s most beautiful daughter into the forest, because guards (being young men) were more lenient with young women. The punishment was not severe if people had a good reason for breaking the rules. For example, there is a story about a large number of villagers who entered the forest before the scheduled day to cut poles because they urgently needed the poles for vegetables on their farms. Otherwise the crop would be lost. These rule-breakers were given a light punishment because the village council realized that the date the council had scheduled for removing poles from the forest was too late. The rule-breakers were only required to make a small donation to the village school. The social institutions for managing village forests in Japan were developed and refined over centuries, reaching their peak during the Tokugawa period (1600 - 1867). The management was successful because it was local. Even though Japan had a feudal and in many ways authoritarian social system, detailed rules for forest use were not imposed from outside the villages. It is also significant that forest access was based on households, not individuals. The share of wood and other materials that a household could remove from the forest did not increase if the household increased in number, and large households could not divide into two households unless they received special permission from the village. As a consequence, every household had a strong incentive not to have too many children, and there was almost no increase in the Japanese population during the Tokugawa period. Japan’s traditional system of forest management began to decline during the years after the Meiji Restoration (1868), and it deteriorated substantially with land reform and other social, political and economic changes following World War II. Forests are still important as a source of water for household, agricultural and industrial use, but the role of forests changed as Japan became a highly urbanized society integrated with the global economy. The importance of forests as a source of essential materials declined as Japan met the same needs by importing fossil fuels for heating and cooking, timber from other countries for construction purposes and chemical fertilizers for farms. Large areas of forest are now cut each year to make way for urban expansion, and the remaining forests have become increasingly important as weekend recreation areas for large urban populations. The scale of sustainable common-property resource use Most of the known examples of the sustainable use of common property resources are local in scale. The local level of resource use has many advantages, including the following. - The resource is more uniform, and therefore easier to understand, when the scale is small. - Local people have a more thorough knowledge of the resource and therefore a stronger basis for knowing what rules will be effective. - Local people know each other well enough to have a foundation for trust. - Local people desire sustainable use because they have a stake in the future of local resources. An important question for human - ecosystem interaction is whether sustainable use of common property resources is possible on a large scale. So far, the experience with large-scale use has not been encouraging. 
Tragedy of the commons is typical for resources exploited by multinational corporations. Because large-scale resource use is a fact of life in today’s global economy, the development of viable international social institutions to prevent tragedy of the commons is a major challenge of our time. There are legitimate differences between local, regional, national and international interests when deciding on the use of natural resources. Government ownership of resources such as forested lands has been associated with sustainable management in some places but unsustainable management in others. Government administration has a general history of granting use rights for timber, livestock grazing or other resources to people with political influence at a price below the real value of the resource - and often without adequate attention to sustainable use. If large-scale control of resource use is unavoidable, it should be organized hierarchically so that national or global economic forces and government authorities do not exclude local participation. Coexistence of Urban Ecosystems with Nature Chapter 10 described the conflict between urbanization and sustainable human - ecosystem interaction. Cities depend upon agricultural ecosystems for food and other products. They rely on natural ecosystems for water, wood, recreation and other resources and services. However, despite this dependence, as cities grow they tend to displace or damage agricultural and natural ecosystems, thereby diminishing the environmental support systems upon which they depend. Cities expand over agricultural lands and natural areas; and even where they do not displace agricultural or natural ecosystems, excessive demands for the products of those ecosystems can lead to overexploitation and degradation. Cities also expand indirectly at the expense of natural ecosystems because displacement of agricultural ecosystems and increasing demands for agricultural products can stimulate agricultural expansion far from the city, displacing natural ecosystems there. Modern urban ecosystems can have a strong impact on distant natural ecosystems because their supply zones extend to so many parts of the world. Why do urban social systems show so little restraint in destroying or damaging the natural and agricultural ecosystems on which they depend? Part of the explanation (as discussed in Chapter 10) is alienation of urban society from nature, particularly if people have no contact with natural or agricultural ecosystems during childhood. The implications for design of urban landscapes are far reaching and profound. Urban landscapes that provide childhood experience with nature may be essential for an ecologically sustainable society. Until recently, virtually all cities contained a landscape mosaic of urban, agricultural and natural ecosystems that provided opportunities for direct contact with nature within walking distance of most people’s homes. Unfortunately, many large cities today have become ‘concrete jungles’ in which this opportunity is no longer available. The result may be a positive feedback loop between an increasingly urbanized society and cities with fewer opportunities for childhood nature experience, creating adults whose lack of emotional connection with nature does not constrain them from damaging their city’s environmental support system. 
Ensuring that natural ecosystems are retained as part of urban landscapes - or finding ways to restore green areas where they have already been lost - should be high on the agenda of urban communities. Is it feasible for natural ecosystems to survive in close contact with modern urban ecosystems? It can happen if the people in the area are concerned and active enough to ensure that the natural ecosystems are not polluted, destroyed or excessively disrupted. The coexistence of cities and chaparral in southern California provides an example. Chaparral ecosystems are characterized by a dense growth of tall shrubs and small trees about 2 - 3 metres in height. They have a rich assortment of birds and other small animals as well as larger animals such as deer, mountain lions (puma), bobcats (lynx), coyotes and foxes. The residential areas of some cities in southern California have convoluted edges that place a large number of homes in close proximity to natural chaparral ecosystems (see Figure 11.2a). These are sometimes shaped by foothills that form part of the terrain around the cities. The dense chaparral vegetation restricts human activity to paths and roads, protecting the natural ecosystem from excessive human impacts while providing opportunities for hiking, mountain biking and other relatively unobtrusive activities. Figure 11.2 - Landscape mosaics of urban and natural ecosystems The size of a natural ecosystem is critical for maintaining its integrity in an urban area because in order to be fully functional a natural ecosystem must be large enough to provide habitat for all of its biological community. Large predators require territories of several square kilometres or more to provide them with food; they cannot survive if the ecosystem is too small. One way to ensure that natural ecosystems are large enough is to have natural ecosystem corridors that connect smaller patches together (Figure 11.2b). Santa Monica mountains Even if urban and natural ecosystems can coexist next to each other, natural ecosystems will be lost if they are simply cleared away by urban expansion. The recent history of the Santa Monica mountains, a natural area of 900 square kilometres at the western edge of Los Angeles, illustrates how social institutions can control urban expansion over natural ecosystems. The Santa Monica mountains have a scattering of houses on a landscape mosaic that features chaparral on the hillsides with oak woodland ecosystems and temporary streams in the canyon bottoms. By the 1950s it was technically and economically feasible to level the granite hills in this area for residential development using earth-moving equipment originally developed during World War II for constructing airplane landing strips on mountainous Pacific islands. Mountains near cities often have a high priority for public ownership and protection because they are the city’s source of water, but Los Angeles brings its water from rivers hundreds of miles away. In the mid-1960s more than 98 per cent of the land in the Santa Monica mountains was in private ownership, much of it in large parcels owned by land holding companies that intended to level it to construct thousands of houses. The rapid growth of Los Angeles during the 1950s and 1960s was extending high-density residential subdivisions into the mountains at a rate that threatened to cover much of the land with houses within a few decades. 
The expansion of high-density housing into the Santa Monica mountains was controlled because of citizen initiatives that stimulated local, state and national governments to take decisive action to protect the natural landscape in the area. Starting in late 1960s, a highly organized, aggressive and persistent coalition of citizens groups in the western part of Los Angeles adjacent to the mountains, homeowners associations in the mountains themselves and environmental organizations such as the Sierra Club lobbied all levels of government for protection of the mountains. Energetic support from a few particularly sympathetic representatives in the Los Angeles city council, the California state legislature, and the United States Congress led to action at all three levels of government by the end of the 1970s. Through negotiation and land condemnation, the state acquired 45 square kilometres of privately owned, undeveloped land adjacent to areas where residential developments at the edge of Los Angeles were expanding rapidly into the mountains. In 1974 this newly acquired land became Topanga State Park, in which residential and commercial development and highway construction were completely prohibited. In 1978 the United States government’s National Park Service established the Santa Monica Mountains National Recreation Area to promote protection of nature throughout the mountains. In 1979 the state of California submitted a Comprehensive Plan for all of the Santa Monica mountains to the United States government. The state legislature created the Santa Monica Mountains Conservancy to implement the Comprehensive Plan, and all city and county governments in the area, while not legally bound to follow the plan, agreed to follow it in principle. The Santa Monica Mountains Conservancy and the National Park Service were both charged with land acquisition, protection of nature on land they acquired, development and maintenance of recreational facilities such as hiking trails, and representation of the Comprehensive Plan at local government hearings regarding development of privately owned lands. The two levels of government have pursued their overlapping missions with somewhat different institutional strengths, priorities and management styles - the overlap and differences enabling them to accomplish together what neither could have accomplished alone. About 40 per cent of the land in the Santa Monica mountains is now under national or state ownership, and an additional 15 per cent is targeted for eventual acquisition. However, the total area with natural vegetation is diminishing gradually as the privately owned land is developed for residential housing or other remunerative purposes such as vineyards. Active involvement to promote compatible use of private lands has been a priority for the state and national agencies because private land use can have such far-reaching effects on the ecological health of nearby land under government protection. The process of influencing private land use has been overwhelmingly complex, with results at times successful and at others disappointing. Even if much of the privately owned land is eventually used for urban or agricultural development, the long-term commitment of state and national governments, local residents, recreational users and environmentalists to protecting the land ensures that the region’s landscape will retain a substantial representation of natural ecosystems for generations to come. 
Resilience and Sustainable Development Resilience is the ability of an ecosystem or social system to continue functioning despite occasional and severe disturbance (see Figure 11.3). To understand resilience, imagine a rubber band and a piece of string tied in a loop. If the rubber band is stretched to twice its normal size, it returns to normal once the pressure is released. The rubber band is resilient because it can return quickly to its normal shape after being changed by a severe stress. The loop of string is very different from the rubber band because it breaks if stretched beyond its normal size and is therefore not resilient. Buildings are resilient if they are designed to withstand severe earthquakes. Social systems and ecosystems are resilient if they survive severe disturbances. Figure 11.3 - Stability domain diagrams comparing high and low resilience Resilient ecosystems are the backbone of a sustainable environmental support system. A key to resilience is anticipating how things can go wrong and preparing for the worst. There are many ways to achieve resilience: - Redundancy: duplication and diversification of function provide backups for when things go wrong. This principle is most conspicuous in the design of modern spacecraft, which have extensive backup systems to replace parts of the spacecraft that fail to function properly. Redundancy is prominent in natural ecosystems. The presence of species with overlapping ecological roles and niches contributes to the resilience of ecosystems. - Low dependence on human inputs: sustainable human - ecosystem interaction is associated with ecosystems that have small human inputs. Nature does most of the work. Large human inputs reduce resilience because sooner or later something will happen that interferes with a society’s ability to provide the inputs. The collapse of Middle Eastern civilizations when irrigation ditches were clogged with sediment is an example (see Chapter 10). Resilience is desirable, but it can conflict with other social objectives that are equally beneficial. Efficiency, for example, has become crucial for modern commercial enterprises because low operating costs are essential for survival. Economic efficiency and resilience are often in conflict because the redundancy that reinforces resilience requires extra cost and effort. Economic pressures to reduce resilience are increasing as competition tightens in the global economy. Tradeoff between stability and resilience Stability implies constancy - things staying more or less the same. Stability is desirable if it reduces unwanted fluctuations. For example, an income is stable if there is a paycheque every month. It is unstable if a person does not receive a paycheque on a regular basis. Figure 11.3 shows how greater stability can be associated with less resilience. Ecosystems and social systems that seldom change are more easily shifted to a different stability domain when external disturbances force them to accommodate change beyond their limited capacity. Modern technology and large inputs of fossil fuel energy have given contemporary society the ability to build a high degree of stability into most people’s lives by insulating them from fluctuations in their environment. Heating and air conditioning allow us to live and work in buildings with nearly the same temperature year round. The modern system of food production and distribution stocks supermarkets with an abundance of food at all times. 
The weakness of the system is dependence upon large energy inputs for heating and cooling buildings or producing and transporting food. Large inputs can increase stability but reduce resilience. A common source of conflict between stability and resilience is the loss of resilience when a system is so stable that it does not exercise its ability to withstand stress. This is well illustrated by the disaster that accompanied a sudden fuel oil shortage in north-eastern United States some years ago. Americans normally enjoy an abundance of energy to comfortably heat their homes. Many people were not prepared when the supply of fuel oil broke down at a time of unusually severe winter weather. The result was an astonishing number of deaths from cold exposure when furnaces ran out of fuel. Some elderly people who seldom went outside during severe weather did not have appropriate clothing for low temperatures. Some people lacked a social support system to deal with this kind of emergency. Floodplains provide another example of the loss of resilience when resilience is not exercised. A large percentage of the world’s human population lives on floodplains because of the fertile soil, abundant supply of water and high capacity for food production. River water spreads over a floodplain for a short period each year, depositing a thin layer of mud that keeps the soil deep, fertile and highly productive for agriculture. However, floodplains also have an important drawback - floods can damage crops, houses and other property. During most years floods are mild and do not cause much damage, but sometimes flooding can be severe. Floodplain societies typically structure their agriculture and urban ecosystems to minimize flood damage because their social systems have coevolved with the floodplain ecosystem. They grow their crops in areas that will not be badly flooded. If they cultivate rice, they use a special variety of rice with a stem long enough to hold the rice grains above the water so that the crop is not damaged. They build their houses above the ground so that floodwater flows under their houses; they store food in safe places so that they do not run out if a flood damages their crops; and they have social institutions to help flood victims deal with the damage that occurs when a flood is unusually severe. Floods can cause some damage despite these adaptations, but the damage is seldom very serious. It is natural for people to desire no damage at all. In recent years, hydroelectric dams constructed to generate electricity have also helped to prevent floods. Other flood control measures such as levees, which increase the heights of riverbanks, keep water from spreading out of a river and over the surrounding floodplain. Flood control has reduced flood damage in the short term, but it has also made human - ecosystem interaction less resilient. Without floods, floodplain ecosystems gradually deteriorate because new soil is no longer deposited each year to maintain soil fertility. Farmers compensate for a reduction in soil fertility by applying larger quantities of chemical fertilizers, which reduces resilience because the agricultural ecosystems become dependent upon substantial fertilizer inputs. Agricultural production could decline drastically if fertilizer prices increase in the future, a real possibility because fertilizer comes from non-renewable resources. 
Flood control can reduce the resilience of social system - ecosystem interaction in another way - the loss of social institutions and technologies that protect people and property from flood damage. A society with flood control ‘forgets’ how to structure its agricultural and urban ecosystems to withstand floods. Crops are grown in places where a flood could damage them, new houses are built at ground level on the floodplain, and other social institutions that reduce the impact of severe floods gradually go out of use. However, sooner or later - perhaps within 20 - 50 years - there is a year with so much rain that the river overflows the dams or levees. Despite the flood control, there is a flood with massive damage because the social system and the agricultural and urban ecosystems are no longer structured to reduce flood damage. The interaction of people with their floodplain ecosystem has lost its resilience - the ability to withstand severe floods - because the stability provided by flood control did not subject the social system to the smaller stress of annual floods. The conflict between stability and resilience is important for ecosystems and social systems in many other ways. The example of forest fire protection in Chapter 6 was about the conflict between stability and resilience. Forest managers increased stability by putting out every fire, but they reduced resilience because continuous protection from small fires increased the vulnerability of forests to large-scale destructive fires. The use of chemical pesticides to control agricultural pests has increased stability but reduced resilience. Traditional and organic farmers do not use pesticides to reduce pest insects that eat the crops; instead, they rely on natural control by predatory insects that eat the pest insects. Natural control is less than perfect because predatory insects do not eliminate pest insects completely; predatory insects and pest insects coexist together in the same ecosystem. Most of the time the crop damage in traditional or organic agriculture is moderate, typically 15 per cent to 20 per cent of the crop, because predatory insects prevent pest insect populations from increasing enough to inflict serious damage. However, sometimes there is more damage. Modern farmers seek less insect damage and greater stability by using chemical insecticides to kill as many insects as possible. Unfortunately, the insecticides kill predatory insects as well as pest insects, so the natural control of pest insects by predators is lost. This makes farmers highly dependent upon insecticides. Without natural control, pest insect populations can increase to devastating numbers when insecticides are not in use. The situation becomes worse when pest insects evolve physiological resistance to insecticides. Farmers are forced to use larger quantities of insecticides, and a positive feedback loop - a ‘pesticide trap’ - is set in motion with more insecticides and more resistance. While insecticides can make agricultural production more stable as long as there is no insecticide resistance, resilience is reduced because insect damage can be devastating when resistance develops in agricultural ecosystems that lack predators to provide natural control. With some crops such as cotton the spiral of increasing insecticide use can continue until the cost of insecticides is so great that farmers can no longer afford to grow the crop. 
Modern medicine has a similar problem with the use of drugs to control diseases such as malaria and tuberculosis. While drugs provide obvious benefits, their large-scale use can lead to drug-resistant strains of disease organisms in exactly the same way that large-scale use of insecticides leads to resistance in insects. The stability (low level of disease) achieved by modern medicine is accompanied by a loss of resilience due to dependence on drugs. The risk of epidemics with drug-resistant strains can be particularly serious when: - the human population has lost its immunity to the disease; - social institutions that provided other means of preventing the disease have been abandoned because they did not seem necessary. The most serious conflict between stability and resilience concerns food security. Although wealthy countries have an abundant and stable food supply, food storage has declined drastically during the past decade. The abundant food supply lulls wealthy societies into an unrealistic sense of security. At the same time that modern science and economic development are increasing global food production, environmental deterioration and dwindling water supplies are reducing the potential. There are also possibilities of sudden and unexpected agricultural failure due to climate shifts induced by global warming. Nations such as Japan, which imports 60 per cent of its food, are particularly vulnerable. The significance of stability and resilience for sustainable development can be expressed in terms of complex systems cycles. Harmony with nature by doing things ‘nature’s way’ and preventing damage to the Earth’s environmental support system is important for sustainable development; but sustainable development is not merely static equilibrium with the environment. Sustainable development is more than making the world function smoothly with no difficulties. Natural fluctuations and natural disasters are an unavoidable part of life. Design for resilience is an essential part of sustainable development. The key to resilience is the ability to reorganize when things go wrong, making dissolution as brief and harmless as possible. What should we do about the conflict between stability and resilience? Both stability and resilience are desirable. It is best to have a balance. The social system should structure its interaction with ecosystems so neither stability nor resilience is overemphasized at the expense of the other. This means using resilient strategies to achieve an acceptable level of stability. Adaptive development is the institutional capacity to cope with change. It can make a major contribution to ecologically sustainable development by changing some parts of the social system so that social system and ecosystem function together in a healthier manner. Adaptive development is about survival and quality of life. Adaptive development builds resilience into human - ecosystem interaction. It does not simply react to problems; it anticipates problems or detects them in early stages, taking measures to deal with them before they become serious. Adaptive development provides a way to work towards sustainable development while simultaneously strengthening the capacity to cope with serious problems that will inevitably arise if sustainable development is not achieved. The two basic elements of adaptive development are: 1) regular assessment of what is happening in the ecosystem; and 2) taking corrective action. 
The key to ecological assessment is the ability to perceive what is really occurring within ecosystems. The key to corrective action is a truly functional community. Adaptive development requires the organization, commitment, effort and courage at all levels of society to identify necessary changes and make them happen. A society examines its values, perceptions, social institutions and technologies and modifies them as necessary. What values are important for ecologically adaptive development? An example is the significance we place on material consumption for the quality of our lives. We all need food, clothing and shelter; but how much more do we need? The scale of our material consumption has a critical impact on sustainable development because of the demands that consumption places on ecosystems. When people think deeply about what is most important to them, they usually identify social and emotional needs relating to family, friends and freedom from stress. Modern society has amplified material consumption in the belief that more possessions will help to meet these basic needs, a belief reinforced by advertising that emphasizes how various products can contribute to sexual gratification, friendship, relaxation, or other emotional needs. The result is a spiral of increasing consumption, intended to satisfy our basic needs but often failing to do so. Modern values about material possessions are connected to our perception that economic growth is essential for a good life. Political leaders tell us that economic growth is their highest priority, while ‘experts’ addressing us through the mass media continually reinforce our belief as a society that a high level of consumption (consumer confidence) is essential for full employment and a healthy economy. The relation of economic growth to sustainable development is a major issue of our time because continual expansion of material consumption is ecologically impossible. What kind of economic growth is sustainable? How can we maintain a healthy economy and satisfy our human needs without placing excessive demands on ecosystems? Adaptive development maintains a public dialogue on key issues such as these and holds political leaders accountable for dealing with them. Adaptive development for a sustainable society is caring about others - caring about community, caring about future generations and caring about the non-human inhabitants of the Earth. It requires real democracy and social justice because decisions and actions that value the future require full community participation. When a small number of rich or politically powerful people control the use of natural resources or other ecosystem services, they often do it for their own short-term economic gain. Societies are limited in their ability to respond adaptively if a few privileged people have the power to obstruct change whenever change threatens their privilege. Strong, dynamic local communities are at the core of adaptive development. Democracy has the fullest participation, and functions best, at the local level. All human interaction with the environment is ultimately local. Consider the exploitation of forests. Although deforestation is driven by large-scale social processes such as urban and agricultural expansion, international markets for forest products and the organization of commerce by multinational corporations, the trees are actually felled by the man with the axe or the bulldozer. 
When local people control their own resources, no tree can be destroyed unless local people allow it to happen. The same is true for cities that grow into impersonal concrete jungles. Local citizens can passively allow investors to change their urban landscape in ways that are profitable. Or they can control the growth of their cities by allowing only development that fits their vision of a humane and liveable city - a vision that usually includes a diverse and nurturing landscape with natural areas, parks and other spaces for community activity. Crises involving concrete and compelling local issues can stimulate communities into action that eventually enables them to control their destiny on a broader front. While details can vary enormously, the following themes are illustrative of long-range action: - Reversing undesirable trends: local communities take stock of their current social or ecological condition, as well as changes during recent decades. They strengthen support systems for the elderly, neighbourhood safety, constructive recreational activities for children, or whatever is most significant in their particular situation. They examine the balance of natural, agriculture and urban ecosystems within their city and in the surrounding regional landscape. If the landscape mosaic is out of balance or changing in that direction, they undertake initiatives to restore the balance. - Anticipating disaster: communities prepare for earthquakes, floods, drought, food security or whatever else is appropriate at their location. Part of the preparation is for emergency response, but part consists of measures taken well in advance to reduce the severity of a disaster or the likelihood that it will even occur. For example, farmers can develop drought-resistant methods of cultivation in regions where droughts may increase in frequency due to global warming. Communities can reinforce local self-sufficiency in food production by forming consumer cooperatives in order to purchase local agricultural produce, establishing markets for local farmers in the process. It is not necessary for community organization to focus on the environment in order to contribute to ecologically sustainable development. Community organization for any purpose will create the capacity to identify environmental concerns and act upon them. The first and crucial step is forming a vision of the kind of life that the community desires now and in the future - a vision that embraces the social and ecological environment. This kind of community vision is sensitive to the landscape. It is sensitive to possible problems in the future. Are food security or future water supply a concern? The vision addresses issues of dependence versus autonomy vis-à-vis the surrounding world. In what ways would greater or less self-sufficiency benefit the community? What are the significant needs that only the local community can deal with? Acting on a community vision requires experimentation. The ability to clearly perceive and articulate alternative choices, and the creativity and imagination to form new possibilities, are essential. Adaptive development means experimenting with possibilities in ways that allow them to be expanded if successful or discarded if not. In today’s world of global communications, adaptive development is networking to help others while learning from their experiences. It is stimulating neighbouring communities, as well as communities in distant lands, to become more sustainable and helping them to do so. 
How can this happen? Much of the answer lies with environmental and community education. Modern education compels us to spend thousands of hours acquiring skills for professional success, but our ecological and community skills are limited. Ecological and community education is learning to form community visions and to think clearly about policy alternatives. It is the ability to think strategically about local ecosystems in terms of the whole system and connections among its parts - including connections between social systems and ecosystems. Is adaptive development a Utopian dream? In fact, adaptive development is not new. Most adaptive development comprises common sense that has guided functional and sustainable communities for thousands of years. Adaptive development is not exclusively about the environment. It touches every way in which a society makes itself truly viable. Of course, local community action has its costs in time, attention and the effort to deal with interpersonal relations. Many people feel that they lack the time or prefer to avoid the hassle, but once they enjoy the social rewards of doing useful things with neighbours, they usually find it more worthwhile. Community gardening is one way to promote community solidarity while incorporating an ecological perspective. Most people enjoy gardening with family and neighbours. They value the fresh food that a garden provides, and gardening puts them in touch with ecosystems in numerous ways. Organic gardening has particular potential to increase ecological awareness. Can adaptive development for an ecologically sustainable society really happen? There are reasons for qualified optimism. Corporations are adapting to the environmental awareness of their customers by developing environmentally friendly products. The private sector is responding to environmental problems with new environmental technologies. Perhaps even more significant, an increasing number of corporations have added sustainable development as an institutional goal. They realize that future business success will depend upon the ecological health of the planet. An example of the private sector’s capacity for adaptation in the realm of technology is its collaboration with government to deal with depletion of the ozone layer. The ‘ozone story’ began with the discovery that chlorofluorocarbons (CFCs), used primarily for refrigeration, were breaking down the ozone layer, which protects the planet from ultraviolet radiation. Within a few years there were international agreements to replace CFCs with environmentally friendly chemicals, and industry followed through to implement the agreements. Similar stories appear to be unfolding in the energy industry as it responds to the Earth’s limited supply of petroleum and natural gas. The use of hydrogen for energy storage and transport is in rapid development, and alternative energy technologies such as solar cells and windmills are expanding quickly. Such developments are positive but the ozone layer is not yet restored to health, and dependence on petroleum and gas is far from resolution. Alarmingly, some major industries continue to obstruct new environmental products and technologies that conflict with their existing markets. There is less basis for optimism about adaptation that conflicts with the basic foundations of modern society. Global warming is an example. 
Reductions in greenhouse gas emissions, particularly carbon dioxide, strike at the heart of modern society’s dependence on massive quantities of fossil fuel energy. In 1997 the Kyoto Protocol set an international goal to reduce the carbon dioxide emissions of industrialized nations by 5 per cent during the subsequent ten years. Some nations promoted this goal with enthusiasm while others accepted it with reluctance. Developing countries, including several large industrializing nations responsible for massive carbon dioxide emissions, refused to pledge any restriction on their emissions. Although the Kyoto Protocol is significant as a first step toward international cooperation on global warming, the actions specified by the Kyoto Protocol are far too modest to be of practical significance. Computer simulation studies have indicated that if every nation follows the Kyoto Protocol completely, greenhouse gases will continue to accumulate in the atmosphere, and the increase in average global temperature during the next fifty years will be reduced by less than 0.1ºC compared to the increase expected with no Kyoto Protocol - almost no difference. The computer studies indicate that full compliance with the Kyoto Protocol would reduce the number of people facing an added risk of coastal flooding from rising seas during the next fifty years by only a few per cent. There would be virtually no impact on regional shifts in climate. Unfortunately, neither industrialized nations nor industrializing nations are willing to consider seriously the major reduction in carbon dioxide emissions that would be necessary for a genuine impact on global warming. What can governments do for adaptive development? Of course they should face up to realities such as global warming and do their best to deal with environmental problems at regional, national and international levels. Equally important, governments should educate their citizens about environmental issues and provide educational and material assistance to strengthen the capacity of local communities to follow a path of adaptive development. Local communities should insist that governments assist them to develop this capacity. Governments can encourage and assist local communities to set up environmental districts similar in organization to the local school districts in many countries. Non-governmental organizations (NGOs) have played a crucial role in developing a worldwide dialogue on environmental issues. The United Nations Conference on Environment and Development (known as the Earth Summit) in 1992 brought together governments, NGOs, industry and others. Though the process has not been smooth, such forums reinforce the interconnectedness of global and local issues and the need for collaboration at all levels. NGOs can serve as catalysts for adaptive development. While recognizing that non-governmental organizations vary immensely in their organization and mission, one brief example will illustrate the possibilities. Nature conservation organizations have discovered that their efforts to protect natural ecosystems as reserves are frequently undermined by human activities in the surrounding area - including activities that are essential to people’s livelihoods. In order to adapt to this, some conservation organizations are setting up businesses to learn and demonstrate how to pursue economic activities in ways that protect natural ecosystems. 
For example, they have embarked on joint ventures with timber companies to manage forests in ways that are not only sustainable for wood production but also maintain natural forest ecosystems as part of the landscape mosaic. They have developed cooperatives with coral reef fishermen to ensure a sustainable supply of fish while maintaining the unique biological diversity of the reefs. Some have formed cooperatives with local farmers to make farming compatible with natural ecosystems in the same watershed. Where silt from soil erosion threatens estuaries or other natural ecosystems, conservationist - farmer joint venture companies are providing the technical support and marketing to enable farmers to secure a satisfactory income with low-erosion crops and cultivation practices. Characteristics of social environments include: - education (for example, memorizing versus learning to think); - watching television and playing video games versus playing with friends outdoors; - safe neighbourhoods versus fear of street crime; - people working near home versus commuting long distances to work; - women having equal/unequal opportunities for professional careers. Characteristics of urban environments include: - air quality; - housing (for example, high-rise apartments versus single-family dwellings); - parks and natural areas in cities; - places for community activities. Characteristics of rural environments include: - recreational opportunities; - forests as sources of clean water, timber, biological diversity, recreation, etc; - food supply and food security. Characteristics of international environments include: - sources of food and natural resources; - travel and recreational opportunities; - impacts of the global pop culture; - impacts of the global economy. Box 11.1 - Examples of environments and their characteristics What can individuals do? They can bring a human ecology perspective and commitment to sustainable development to the workplace, and they can organize details of their daily lives to be in greater harmony with the environment. Equally essential, individuals can work for the viability of their local community - helping to form a community vision; assessing the ecological status of their region and changes in the local landscape; contributing to community support systems; and, in general, building long-term ecological health and resilience into the community and its landscape. Individuals can teach their neighbours about sustainable development and awaken their desire to participate in setting an ecologically sustainable course for their future. As the eminent anthropologist Margaret Mead once said, ‘Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever does.’ Things to Think About - What are ways that you build resilience into your personal life? What are ways that your society achieves resilience? What are ways that the resilience of your local community, your nation or the world is weak? What can be done to improve resilience in those instances? - Think of examples of the conflict between stability and resilience in your personal life. Think of examples in the society in which you live. Are stability and resilience in balance? What can be done to achieve a better balance? - Agreement about good rules for using a common property resource is essential for sustainable use of the resource. 
Think of concrete examples of social institutions that prevent tragedy of the commons, using information from newspaper or magazine articles or your own personal knowledge. Then think about your society’s social institutions for using resources such as petroleum, minerals, water, and the land. Are they effective for sustainable use? Are there ways that you think they could be improved from this perspective?
- For what kinds of public concerns is your local community (or city) organized? Are some of them environmental? Are there environmental concerns that are not addressed even though you think they should be? How do you think the community can be educated so that it establishes appropriate priorities for its interactions with ecosystems?
- What are the roles of city, state (prefectural, provincial) and national government in shaping human/ecosystem interaction in your country? What are the roles of corporations? What can citizens do to stimulate governments and corporations to follow more ecologically sustainable policies?
- Strategic planning is a way to initiate constructive action in support of ecologically sustainable development for your community. Brainstorm with some friends to discover your opinions regarding the following essential steps for strategic planning:
- An ideal vision of your community twenty years from now. What kind of life do you want for your children and grandchildren? What kind of environment do you want for your children and grandchildren so they have an opportunity for that kind of life? A vision can include things that you want to keep the same and things that you want to improve. “Environment” can have a broad meaning, including the social and urban environment as well as natural and agricultural ecosystems (Box 11.1).
- Obstacles to realizing the vision. What environmental problems could prevent the kind of life that you have outlined in your vision? You can include problems that exist now and need improvement (e.g., air quality in cities), as well as problems that do not exist now but where a trend could create a problem in the future. (For example, the present destruction of forests or farmland by urban expansion does not yet have serious consequences in some areas, but it could if it continues for long.) You can also think of things that are not a problem now but could suddenly become a problem at some time in the future (e.g., food security).
- Actions to overcome the obstacles. Decide what your community can do about the environmental problems that you identified in “obstacles to realizing the vision”. What can individuals do? How can you start? What are the institutional obstacles to successful action? How can institutional obstacles be overcome? Can you do it yourselves or do you need the cooperation of local or national governments, the private sector, or non-governmental organizations?
<urn:uuid:f4c1c8e3-64c1-48a2-b460-904feac6a218>
CC-MAIN-2016-26
http://gerrymarten.com/human-ecology/chapter11.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950616
11,095
3.703125
4
Maybe you're an engineer, or maybe you work with a designer. Or maybe you think that design is about using Photoshop. Maybe you just care about design. If any of these are true, then there's a lot you can learn by reading and understanding these books. They'll help you understand how to solve problems for people out there in the real world. These books cover the basics - the foundations of designing for people. No fancy Photoshop tricks. Nothing much about visual design. Instead these books are about understanding what people need when they interact with the things you make. There is a lifetime of learning in these books. So get to it. If you're making an app, creating a new website, or designing a chair, start learning how to design and make better things. The best book about designing for people. If you've ever been frustrated by a confusing parking meter, or a door than opens the wrong way then this book explains why. This book explains how the design of the things we use everyday can be improved. Simple, clear, comprehensive. This is a solid foundation for problem solving across all areas of design. From hierarchy and Hick's law to mental models and mapping, this text covers many of the major design principles. The book itself is laid out with one principle per page, with an accompanying page of examples. A great practical reference. A wonderful, lyrical book about the art and science of typography. If you had to choose a single book on typography, this is it. This is from a man who loves letters and language. Incredible treatments on type. Sections on rhythm & proportion, structural forms. harmony, shaping the page, combing and choosing type. Phi for layouts, type as a music scale. History of typography. Type foundries and type designers. Specimens. And more. A beautiful book. You'll learn more every time you go back. This book is here because it is about using form and structure as a system to help you solve problems. It's more a practical framework for solving problems than just a book about grids. That said, this is also the definitive early work on grid design. Muller-Brockmann was one of the key pioneers of grid design and the swiss / international movement. It includes considerations of grid and structure: baseline grids, measuring systems, units, margins, faces, construction of the type area and grid and numerous examples of type and graphics in various grid combinations. This is the kind of book that you can return to again and again, and always find something new. Breaking away from the other books a little, this is a monograph about Rams and about principles of good design. As little design as possible… There are early sketches and prototypes. There are incredible behind-the-scenes photos. There are stories about Rams and his team. There's a foreword by Ive, and the ten principles true for all good design are laid out in one place. Incredibly inspirational book. You should also check out Less is More which is equally great.
<urn:uuid:e0a755c7-2d93-4070-80b9-aec9a624d435>
CC-MAIN-2016-26
http://gizmodo.com/5976756/five-books-every-aspiring-designer-must-read?tag=books
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957233
627
2.703125
3
Fast internet is fast. Google Fiber's gigabit connections? That's like driving a sports car compared to the go-cart-speed connection that's probably in your house. But new technology from IBM opens the door for connections that are beyond fast. Comparatively, it's like flying a fighter jet. IBM researchers in Switzerland just unveiled the prototype for an energy-efficient analog-to-digital converter (ADC) that enables connections as fast as 400 gigabits per second. That's 400 times faster than Google Fiber and about 5,000 times faster than the average U.S. connection. That's fast enough to download a two-hour-long, 4K ultra high definition movie in mere seconds. In short, that's incomprehensibly fast. The ADC chip itself was actually built for loftier purposes than downloading episodes of Planet Earth, though. It's bound for the Square Kilometer Array in Australia and South Africa to help us peer hundreds of millions of light years into space, hopefully to give us a better idea of what the universe was like around the time of the Big Bang. This massive radio telescope will devour data, too. It's expected to gather over an exabyte every day when it's finished in 2024. That's over a billion gigabytes. Believe it or not, 400 gigabits per second isn't even the fastest connection the world has seen. For that you'll have to go to the United Kingdom, where researchers recently developed 1.4-terabit internet using commercial-grade hardware. That's warp speed. [ZDNet] Image via IBM
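The comparisons above are easy to sanity-check with a little arithmetic. The figures below assume Google Fiber at 1 gigabit per second, a two-hour 4K film of roughly 100 gigabytes and decimal (SI) units throughout; those are illustrative assumptions, not numbers from IBM or the SKA.

GIGA = 10**9                                   # decimal (SI) units assumed throughout

adc_bits_per_s = 400 * GIGA                    # the 400 Gb/s prototype link
fiber_bits_per_s = 1 * GIGA                    # Google Fiber, assumed 1 Gb/s

print(adc_bits_per_s / fiber_bits_per_s)       # 400.0 -> "400 times faster"

movie_bytes = 100 * GIGA                       # assumed size of a two-hour 4K film
print(movie_bytes * 8 / adc_bits_per_s)        # 2.0 -> about two seconds to download

exabyte_bytes = 10**18
print(exabyte_bytes / GIGA)                    # 1e9 -> an exabyte is a billion gigabytes

Under those assumptions the film arrives in about two seconds, and an exabyte works out to a billion gigabytes, which is the scale of data the telescope is expected to produce each day.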
<urn:uuid:54761286-3ec8-40c9-b61c-198de2e7bc2f>
CC-MAIN-2016-26
http://gizmodo.com/a-tiny-new-chip-promises-internet-400-times-faster-than-1521523614
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942042
324
2.703125
3
Monday, August 29, 2011 Putting Culture on the Map In the UK tourist signs have a brown background and are often referred to as 'brown signs'. Their purpose is to direct people to tourist attractions, such as castles, museums or historical buildings. Brown signs are the responsibility of local authorities and therefore there is no central record of all the country's brown signs. Follow the Brown Signs is a website dedicated to tracking down (and mapping) all the humble brown signs in the UK. It is possible to search for brown signs by address or postcode and view a Google Map of all the signs around that location. There are 93 different types of brown signs, signifying such diverse categories as good brass rubbing locations and 'heavy horse' centres. Follow the Brown Signs lets you search for brown sign locations by category. As well as providing a little history about each category you can also view each of the 93 different types of brown sign on its own individual map. The UK's Blue Plaque scheme is a way of commemorating the lives of famous residents of the country. Blue circular signs are erected on houses to indicate that someone of note was born or once lived there. The PlaqueGuide is a Google Map of the UK's blue plaque houses. The map uses blue circular map markers to show the location of houses with plaques. You can find out who the plaque is for by just mousing over a marker. If you click on a marker you can view a Street View of the plaque and read a Wikipedia article about the individual commemorated by the plaque. The PlaqueGuide is crowdsourced, so anyone can add information about the location of a blue plaque.
<urn:uuid:7da8ab53-c9bb-4ab3-be01-69728c716fed>
CC-MAIN-2016-26
http://googlemapsmania.blogspot.com/2011/08/putting-culture-on-map.html?showComment=1314720776647
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946291
341
2.875
3
We are kicking into full gear with our early childhood themes that the Early Childhood Education Team will feature this year! This week is All About Me. I’m sharing a playful phonological awareness activity to build rhyming skills. Just a reminder that phonological awareness activities are oral activities. They are meant to strengthen hearing and speaking of language and phonemes. Full Disclosure: This post contains affiliate links. Why We Chose This Activity My boys love, love, love to rhyme. The book that really sparked their interest in rhyming was the Rhyming Dust Bunniesby Jan Thomas. It’s one of our all time favorite books. We play rhyming games all the time. The following activity came from combining their love of rhyming and drawing. 1. We picked up plastic sleeve pockets from the dollar bins at Target. We just slid a blank piece of white paper inside to make our “dry erase board”. You can also use a traditional dry erase board or window too. 2. Draw a person on the board. 3. Have a little sponge or cloth ready to be the eraser. Time to Play Read each rhyme to the child. The child gets to erase the part of the person that completes each rhyme. You can download the free rhyme sheet to use. Why is there a bear in a chair sitting in my ________? (hair) I think that bug just might land right in my ________. (hand) Did someone put ants in my _________? (pants) Oh dear oh dear, there is a bee near my ______. (ear) Bye, bye little flies, it’s time for me to close my _____. (eyes) Sound the alarm, there is a monkey on my _____. (arm) Get the hose and clean out my _____. (nose) Did you hear the news? I have two brand new _____ .(shoes) Please brush this sand off my ____. (hand) I can’t hear you. I think I have something stuck in my ____. (ear) Pigs live on a farm and a duck just landed on my ____. (arm) A breeze just blew in from the south blowing some dust right into my ______(mouth) You may need to repeat the rhyme more than one time and emphasize the word that your child needs to rhyme. Enjoy rhyming with your child and building phonological skills! Keep reading. Visit the links below for more great all about me ideas from the #teachECE team! Spell Your Name Sensory Bin via Mom Inspired Life All About Me Early Writing Activity via The Educators’ Spin On It All About Me Booklet via Tiny Tots Adventures Learning Names in Preschool with ALL 5 Senses! via The Preschool Toolbox Scratch and Sniff Names via Fun-A-Day Fun Kindergarten Math Activities Using Their Names via Capri + 3 All About Me Math Race via Still Playing School DIY My Name Puzzle Printable Template via Learning 2 Walk Name Recognition Snack via Munchkins and Moms Build my name via Rainy Day Mum All About Me DIY Puzzles for Preschoolers via Life Over C’s
<urn:uuid:3d766e09-ea26-45aa-9411-eebc937355de>
CC-MAIN-2016-26
http://growingbookbybook.com/erase-me-rhyming-activity/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.898674
701
3.34375
3
Delayed arrays are represented as functions from the index to the element value. Every time you index into a delayed array, the element at that position is recomputed.

Instances of the delayed representation tag D include:

- Repr D a: compute elements of a delayed array.
- (Fillable r2 e, Elt e) => FillRange D r2 DIM2 e: compute a range of elements in a rank-2 array.
- (Fillable r2 e, Shape sh) => Fill D r2 sh e: compute all elements in an array.
- Combine D a D b
- Combine B Word8 D b
- Storable a => Combine F a D b
- Unbox a => Combine U a D b

Array r sh e: arrays with a representation tag, shape, and element type. Use one of the type tags (such as D or U) for the representation, and DIM1, DIM2 and so on for the shape.

toFunction: O(1). Produce the extent of an array, and a function to retrieve an arbitrary element.
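To make the delayed representation concrete, the following is a minimal sketch of building and forcing a delayed array, assuming the standard repa 3.x API exported by Data.Array.Repa (fromFunction, the ! indexing operator, and computeS); the array name and shape here are illustrative only, not taken from the package documentation.

```haskell
-- Minimal sketch, assuming the repa 3.x API exported by Data.Array.Repa.
import Data.Array.Repa

-- A 4x4 delayed (D) array: nothing is stored, only the extent and the
-- indexing function. Each lookup recomputes the element on demand.
table :: Array D DIM2 Int
table = fromFunction (Z :. 4 :. 4) (\(Z :. i :. j) -> i * 4 + j)

main :: IO ()
main = do
  -- Indexing a delayed array runs the function for that index,
  -- so repeated lookups repeat the work.
  print (table ! (Z :. 2 :. 3))

  -- computeS evaluates every element once and stores the results in a
  -- manifest unboxed (U) array; indexing it is then just a memory read.
  let manifest = computeS table :: Array U DIM2 Int
  print (manifest ! (Z :. 2 :. 3))
```

The point of the delayed representation is that chains of operations on D arrays can be composed without allocating intermediate results; elements are only materialized when the array is forced with computeS (or computeP for parallel evaluation).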
<urn:uuid:742b2aba-e711-40d9-9fae-b8a920d7856f>
CC-MAIN-2016-26
http://hackage.haskell.org/package/repa-3.1.3.1/docs/Data-Array-Repa-Repr-Delayed.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.69425
215
2.84375
3
GLOBAL PRESSURES AND THE FLEXIBLE RESPONSE
WALTER G. HERMES
AMERICAN MILITARY HISTORY
ARMY HISTORICAL SERIES
OFFICE OF THE CHIEF OF MILITARY HISTORY
UNITED STATES ARMY

Global Pressures and the Flexible Response

When President John F. Kennedy assumed office in the opening days of 1961, the prospects for peace were not encouraging. Premier Khruschev had been cool since an American U2 plane gathering intelligence had been shot down over Russia in the spring of 1960. Although the possibility of a general nuclear war had receded, Soviet support of wars of national liberation had increased. Despite the unfavorable signs President Kennedy was quite willing to renew the quest for peace. As he pointed out in his budget message of March 1961, the United States would make "efforts to explore all possibilities and to take every step to lessen tensions, to attain peaceful solutions, and to secure arms limitation." To Mr. Kennedy, diplomacy and defense were not distinct alternatives, but complemented each other.

Yet the President, well aware that the search for peace might be long, determined to give the United States a more flexible defense posture that would enable the nation to back its diplomacy with appropriate military action. The country should be strong enough to survive and retaliate after an enemy attack, on the one hand, and be able to prevent the erosion of the free world through limited war, on the other, the President informed Congress. "Any potential aggressor contemplating an attack on any part of the free world with any kind of weapons, conventional or nuclear, must know that our response will be suitable, selective, swift, and effective."

The Changing Face of the Cold War

As President Kennedy ushered in the era of flexible response, massive retaliation was officially de-emphasized. Long overdue, the shift in military policy stressed the need for ready nonnuclear forces as a deterrent to limited war. Besides, other changes under way indicated that America would need a more flexible position of strength in the days ahead.

By 1961 the tight bipolar system that had arisen after World War II and under which the United States and the Soviet Union were the only truly great powers was on the wane. No longer could the Soviet Union claim to speak for all Communists without challenge. In eastern Europe the satellite nations were pressing for more freedom of action and were eager to increase trade with the West. The Chinese Communists, on the other hand, were becoming impatient with Soviet conservatism and strongly opposed peaceful coexistence. And, in the New World, Fidel Castro was pursuing his own program of intrigue with subversion in Latin America. The Communist bloc, therefore, was in the process of splintering, with groups favoring the Soviet, Chinese, or Cuban brand of Communism emerging in many countries.

Dissent was also mounting in the West. With the success of the Marshall plan and the return of economic prosperity to western Europe during the fifties, France, West Germany, and other nations became creditor countries and were less and less dependent for the maintenance of their economies upon the United States. The return of Charles de Gaulle to power in France under a new and strong executive type of government in 1958 produced growing dissidence within the North Atlantic Treaty Organization as de Gaulle sought to recapture some of France's former glory by taking an increasingly independent role.
Outside Soviet and American circles the presence of a third force began to make itself felt during the fifties and early sixties. Most of the former colonial possessions in the Middle East, Asia, and Africa were granted independence in the fifteen years following World War II, and although many of them retained economic and cultural ties with the mother country they were generally reluctant to become politically involved in the East-West struggle. Since the new nonaligned nations contained about one-third of the world's population and controlled much of the earth's oil and other resources, they were courted by both sides. Many suffered, however, from basic political instability and economic weakness that made them fertile fields, first for Communist propaganda and subversion and then for wars of national liberation.

In May 1961 President Kennedy, touching on such revolutionary wars, pointed out to Congress that the great battleground of the sixties would be "the lands of the rising peoples." As the revolts to end injustice, tyranny, and exploitation broke out, he noted, the Communists had sent in arms, agitators, technicians, and propaganda to capture the rebel movements. Working behind the scenes with terrorists and saboteurs, the Communists had extended their control over large areas since World War II and had even broader ambitions. In such conflicts, the President had affirmed, "It is a contest of will and purpose as well as force and violence—a battle for minds and souls as well as lives and territory." The United States could not, Kennedy concluded, stand aside and passively let this fight be won by the Communists.

With half the world still in the balance, the prospects for peaceful coexistence between the United States and the Soviet Union dimmed in 1961. Insurgent movements were already disrupting Laos, Vietnam, the Congo, and Algeria and the threat of revolutionary outbreaks hung over several other countries in South America, Africa, and Asia. In most instances the Communists were abetting the insurgents and the United States was providing aid to the government forces. It was ironic, under the circumstances, that President Kennedy's first brush with the Communists should result from American support of an insurgent group.

Cuba and Berlin

During the closing days of the Eisenhower administration the United States had severed diplomatic relations with Cuba, but the presence of a Communist satellite almost within sight of the mainland remained a constant source of irritation. In April 1961 a band of Cuban exiles launched a poorly conceived invasion of the island at the Bay of Pigs with limited air and no naval support. When the people failed to rise and join the invaders, the operation collapsed and most of the invading force was taken prisoner. Since the United States had sponsored the exiles, their utter failure damaged American prestige and enhanced Fidel Castro's stature.

The invasion also brought offers of Soviet help from Premier Khruschev and dark hints that he was ready to employ Russian missile power to aid Castro. The timing of the Cuban fiasco was particularly unfortunate, since President Kennedy was scheduled to hold a summit meeting with Khruschev in Vienna in early June on the delicate subject of Berlin. In that divided and isolated city the growing prosperity of West Berlin contrasted sharply with the poverty and drabness of the Soviet sector and West Berlin had become as great an irritation to the Communists as Cuba was to the United States.
In 1958 Khruschev had demanded that Berlin be made a free city and threatened that unless western troops were withdrawn in six months he would conclude a separate treaty with East Germany. Although Khrus- chev later backed off from this threat and even showed signs of a conciliatory attitude on Berlin, at the Vienna meeting he made a complete about-face. Once again he informed Kennedy that unless the West accepted the Soviet position he would take unilateral action to solve the Berlin impasse. If the Soviet premier hoped to intimidate the new President in the wake of the Cuban setback, he was unsuccessful. Instead, Mr. Kennedy in July requested and received additional defense funds from Congress as well as authority to call up to 250,000 members of the Ready Reserve to active duty. The President refrained from declaring a national emergency, which would have permitted him to bring up to a million Reserve members back to federal service; he did not wish to panic either the American public or the Soviet Union by a huge mobilization. On the other hand, he determined to strengthen the conventional armed forces in the event that Soviet pressure at Berlin demanded a gradual commitment of American military power. During August tension heightened as thousands of refugees crossed from East to West Berlin, and the Communists took the drastic step of constructing a high wall about their sector to block further losses. As the situation grew worse, the President decided in September to increase American troops in Europe and to call up some Reserve personnel and units to strengthen the continental U.S. forces. By October almost 120,000 reserve troops, including two National Guard divisions, had been added to the active Army, and the Regular troop strength had been increased by more than 80,000. The partial American mobilization and the quick reinforcement of Europe with U.S. ground, air, and naval units were accompanied by strong efforts to bring the military personnel and equipment remaining in the United States to a high degree of readiness. Accelerated training and production prepared and outfitted the new and the Reserve troops as quickly as possible. In the event that an emergency should arise in Europe, extra equipment was shipped there to be held in storage for units that might have to be air-transported from the United States in a hurry. As the Soviet Union became aware that the Berlin challenge would be met swiftly and firmly, it began to ease the pressure again. The new administration had passed its first test. By mid-1962 the Reserve forces called up were returned to civil life, but the Regular increases were retained. The next Soviet ploy was less direct but more dangerous than the Berlin threat. After the Bay of Pigs invasion, the Soviet Union had dispatched military advisers and equipment to the Castro forces in Cuba, ostensibly to help them repel any future attacks. In the summer of 1962, however, rumors that the Soviet assistance might include offensive weapons such as the medium-range bombers, and possibly medium-range ballistic missiles as well, increased. Not until mid-October could the conclusive photographic proof of the presence of the missiles in Cuba be obtained. With the pictures in hand, Mr. Kennedy quickly took steps aimed at getting these offensive weapons removed from the island. The President made it clear that the United States would retaliate against the Soviet Union with nuclear weapons if any Cuban missiles were used against an American nation. 
The Strategic Air Command's heavy bombers went on a 15-minute alert status with some aircraft aloft at all times. To buttress the defense of the southeastern states closest to Cuba, fighter-interceptor squadrons and Hawk and Nike missile battalions moved in to supplement the local air defense forces. At sea Polaris-equipped submarines left for preassigned stations in case the Soviet Union decided to use the occasion for a nuclear showdown.

On October 22 President Kennedy announced that he would seek the endorsement of the Organization of American States (OAS) for a quarantine on all offensive military equipment being shipped to Cuba, tighten surveillance of the island, and reinforce the U.S. naval base at Guantanamo. With OAS approval, the quarantine went into effect two days later. Meanwhile dependents had been removed from Guantanamo and marines had been lifted by air and sea to defend the base. The Army began to move over 30,000 troops, including the 1st Armored Division, and over 100,000 tons of equipment into the southeastern states to meet the emergency.

As the Navy's Second Fleet started to enforce the quarantine on October 25, hundreds of Air Force and Navy planes conducted surveillance missions over the Atlantic and Caribbean to locate and track ships that might be carrying offensive weapons to Cuba. The continued activity at missile construction sites in Cuba had now placed the world on the brink of its first nuclear war. The possibility of a clash between American warships and Soviet merchantmen carrying offensive war items to Cuba added to existing tensions. As the crisis mounted in intensity, however, the Soviet Union ordered such ships to return home and no incidents occurred.

In the meantime a dramatic series of messages passed between Mr. Kennedy and Premier Khruschev, and the Soviet Union finally agreed on the 28th to dismantle and remove the offensive weapons from Cuba. During the next three weeks the sites were gradually evacuated and the missile systems and technicians were loaded on Soviet ships. Negotiations for the removal of the Russian bombers were completed in November and they were shipped out of Cuba in early December. Although the quarantine ended on November 20, many air units remained on duty stations to continue surveillance missions in the area to insure that the sites remained inactive. Army forces deployed during the crisis did not return to their home bases until shortly before Christmas.

For the second time in two years the Soviet Union had demonstrated that it was unwilling to risk nuclear war. With the ending of the Cuban missile crisis, Soviet attempts to challenge the United States directly began to subside and Soviet interest centered increasingly upon support of so-called wars of liberation. For the United States, Berlin and Cuba marked the beginning of the flexible response era, as the American reaction ranged from limited and conventional measures to the threat of general war.

Detente in Europe

The aftermath of Berlin and Cuba produced several unexpected developments. Evidently convinced that further testing of the American leadership might be unwise, the Soviet Union adopted a more conciliatory attitude in its propaganda and indicated that at long last it might be willing to conclude a nuclear test ban treaty. In view of the long history of fruitless negotiation over nuclear controls, the Soviet move was a promising breakthrough.
Under the provisions of the accord ratified in the fall of 1963, the Soviet Union, the United Kingdom, and the United States agreed not to conduct nuclear explosions in space, in the atmosphere, or underwater; underground explosions were permissible as long as no radioactive material reached the surface. Although the treaty was weakened by the failure of France to ratify or adhere to it, it marked the first major agreement between the Soviet Union and the United States since the Austrian peace treaty of 1955. The explanation for Soviet co-operation with the West in the sixties, however limited, may have resulted partly from the growing independence of Communist China. The Chinese had never embraced the concept of peaceful coexistence with the capitalist countries and their criticism of what their leaders claimed was too soft a line by Moscow to the West had mounted. As the Sino-Soviet split widened, the Soviet Union adopted a less threatening role in Europe, and the shift had far-reaching effects upon the carefully built-up alliance system designed to guard western Europe against Soviet aggression. The defense system of the North Atlantic Treaty Organization had been constructed around the American strategic deterrent, but the credibility of the U.S. determination to defend western Europe in the face of a growing Soviet nuclear power that might devastate the United States itself had come into serious question. Under President Kennedy and, after his assassination in November 1963, under President Lyndon B. Johnson, the United States made several efforts to reassure its NATO allies of American good faith. The reinforcement of U.S. conventional forces in Europe at the time of the Berlin crisis provided NATO with more options in responding to Communist pressure. In 1963 the United States assigned three Polaris submarines to the U.S. European Command and suggested that a multilateral naval force be established but the idea was dropped in 1965. By that time it was becoming clear that General de Gaulle intended to disengage France militarily from NATO. The French cut the ties gradually by participating less and less in NATO exercises, while the French nuclear strike force slowly expanded. In early 1966 de Gaulle served notice that all NATO troops in the country would have to depart. All remaining French forces would be relieved from NATO command during 1966, de Gaulle stated, but France would not quit the alliance. The French president maintained that conditions in Europe had changed since 1949 and the threat to the West from the Soviet Union had lessened. The military disassociation of France from NATO was unfortunate, since the chief headquarters and many of elaborate lines of communications supporting the forward military forces were in France. When representations to the French proved fruitless, however, the exodus of NATO troops got under way in mid-1966. Supplies and equipment were relocated at bases in the United Kingdom, Belgium, Germany, and the Netherlands. In early 1967 Supreme Headquarters, Allied Powers Europe (SHAPE), moved to Belgium, the U.S. European Command shifted its headquarters to Germany, and the Allied Command Central Europe as well as the ground command was transferred to the Netherlands. Changes within the alliance had been slow. Despite the fact that the concept of massive retaliation had been discredited except as a last resort, it was not until 1967 that a strategy of flexible response was officially adopted. 
Apparently the Soviet Union also had decided that nuclear warfare offered only the bleak prospect of mutual destruction. In late 1969 the Soviet Union joined the United States in a series of Strategic Arms Limitation Talks in Finland to explore ways and means of stopping the nuclear arms race and of beginning seriously the task of disarmament. Progress was slow because of the many technical points that had to be settled, but at least a start was made. In the meantime, however, the United States proceeded with its plans to deploy its ballistic missile defense system, using the NIKE-X program, with the long-range Spartan and the short-range Sprint missiles, as its base. The Safeguard System, as it eventually came to be known, envisioned a phased installation of the missiles, radars, and computers at key sites across the country by the mid-seventies. Although the Safeguard System was limited and was regarded as a thin line of defense, President Nixon was reluctant to halt site development and construction of the missile complexes until an agreement was reached in the Strategic Arms Limitation Talks. Thus, despite considerable Congressional and public opposition, work on the first two bases in Montana and North Dakota was initiated in 1970. The Growing Commitment in Underdeveloped Areas The American policy of containment met its most serious challenge in southeast Asia as the Communist revolutionary wars to take over Laos and South Vietnam picked up momentum in the early sixties. Taking advantage of the political instability in these countries, the Communists had built up their political and military organizations and gradually brought large segments of the rural areas under their control. Efforts by the governments to regain these areas through military operations had been largely unsuccessful despite the presence of American advisers and the provision of military equipment and supplies. The struggle to keep Laos and South Vietnam out of the Communist camp, American diplomats and advisers soon discovered, was complex. After decades under French rule, many Indochinese leaders were willing to accept American assistance but unenthusiastic about instituting political and economic reforms that might lessen their newly won power. The situation in Laos mirrored the American frustration. Until 1961 the United States supported the pro-West military leaders with aid and advice, but the efforts of these leaders to unify the country by force had failed and three different factions controlled segments of the country. As conditions steadily worsened, President Kennedy decided to recognize a coalition government in a neutral Laos. Fourteen nations signed a declaration in July 1962 confirming the independence and neutrality of Laos, which pledged itself to enter into no military alliance as well as to clear all foreign troops from the country. While the future of Laos remained clouded, a coalition government was preferable to a Communist takeover. By the end of 1962 over 600 American advisers and technicians stationed in Laos had left the country. The concern over Communist activity in Laos and Vietnam also involved Thailand in mid-1962. To deter Communist expansion and to protect the territorial integrity of Thailand under its obligations to the Southeast Asia Treaty Organization the United States set up a joint task force at the request of the Thai government. 
As Communist troops maneuvered not far from the Thai border, a reinforced battalion of marines was quickly transported to Thailand and was followed by a battle group of the 25th Infantry Division. Army signal, engineer, transportation, and other service troops moved in to support the U.S. combat forces and to provide training and advice for Thai units. The quick response so strengthened the Thai government's position that the Communist threat abated during the remainder of the year, enabling first the Marine and then the Army troops to withdraw. Many of the American service support forces, however, remained to assist in Thai training and logistical support programs. As the war in Vietnam intensified in the mid-sixties, the roads, airfields, depots, and communications constructed and maintained by U.S. forces in Thailand became extremely valuable in supporting the American effort in Vietnam. Trouble in the Caribbean Although Europe and Asia remained the critical areas in the policy of Communist containment, American interest in Caribbean developments increased sharply after Cuba's defection from the West. When a military revolt in April 1965 to oust the civilian junta in the Dominican Republic was followed by a military counterrevolution, the United States monitored the situation closely. As both factions sought to gain control of the government machinery, the capital city of Santo Domingo became a bloody battleground and all semblance of law and order vanished. Concern over the immediate threat to American lives rose as diplomatic efforts to restore peace failed. First to provide protection for U.S. nationals and subsequently to insure that the Communists did not get another foothold in the Caribbean, President Johnson sent Marine Corps and then Army airborne troops to Santo Domingo to stabilize the situation. Less than seventy-two hours after alert, two battalions of the 82d Airborne Division from Fort Bragg, North Carolina, air-landed at a field east of Santo Domingo and fanned out toward the city. They were soon reinforced by four additional airborne battalions with support units. In the meantime, Marine troops consolidated their hold on the western portion of Santo Domingo. Since the forces of the rebels, the so-called Constitutionalists, were concentrated in the southern part of town, Lt. Gen. Bruce Palmer, Jr., the American ground force commander, carried out a night operation to link up the Army and Marine units and to separate the warring factions. Using three airborne battalions for the action, Palmer had the first move into the easternmost sector, then passed the other two through the first to secure a corridor. With surprising ease and speed, the 82d Airborne's troops crossed the city and joined with the marines, thus creating a buffer zone between the two fighting forces. By the end of the first week in May all nine battalions of the 82d Airborne Division and four battalions of marines were in the Dominican Republic. With supporting forces the total number of American troops soon reached a peak of about 23,000. They patrolled the streets of Santo Domingo, maintained law and order, and distributed food, water, and medical supplies to both sides. The quick landings in force and the establishment of the buffer zone made further fighting on a large scale impossible. With stalemate the alternative, the adversaries began a series of negotiations that lasted until September. The U.S. 
intervention in the Dominican Republic became the subject of spirited discussion in the United States and abroad. Despite unfavorable public reaction to the intervention in some Latin American countries, the Organization of American States did ask its members to send troops to the Dominican Republic to help restore order. Six members—Brazil, Costa Rica, El Salvador, Honduras, Nicaragua, and Paraguay—eventually dispatched forces and joined the United States in forming the first inter-American force ever established in the Western Hemisphere.

Although American troops constituted the largest contingent of the force, Lt. Gen. Hugo Panasco Alvim of Brazil was named commander in May and General Palmer became his deputy to emphasize the international composition of the force. Some U.S. troop withdrawals began almost immediately after the Latin American units arrived. The acceptance of a provisional government by both sides in early September relieved much of the tension in the Dominican Republic, and by the end of 1965 all but three battalions of the 82d had returned to the United States. After elections in mid-1966, the last U.S. and Latin American elements pulled out in September, ending the 16-month intervention. Although the legality and the unilateral nature of the U.S. action have been challenged, there is little doubt but that the intervention saved lives and restored law and order in the Dominican Republic.

Civil Rights and Civil Disturbances

Within the United States itself, meanwhile, racial tensions growing out of the civil rights movement had dictated the use of troops in civil disturbances on a scale reminiscent of the labor troubles of the late nineteenth century. The first and most dramatic use of federal troops came in September 1957 when President Eisenhower dispatched a battle group of the 101st Airborne Division to Little Rock and federalized the entire Arkansas National Guard to enforce a court order permitting nine Negro students to attend Central High School. The paratroopers successfully dispersed the mob that had gathered at the school and stabilized the situation. Some weeks later they turned over the task of protecting the Negro students to the Arkansas National Guard, which kept about 400 members on federal duty at Little Rock until the end of the school year.

The incident marked the first time a President had exercised his power to call the militia into federal service to control a domestic disturbance since 1867, and it was one of the few times in American history that a Chief Executive used either Regular troops or the National Guard in the face of opposition from a state's governor.

Other instances of the same sort followed in the administrations of Presidents Kennedy and Johnson. When, in September 1962, the governor of Mississippi attempted to block the court-ordered registration of James H. Meredith, a Negro, at the University of Mississippi at Oxford, President Kennedy first sought to enforce the law by using federal marshals. When riots broke out on the campus during the night before the registration of Meredith and the marshals were unable to control the mob, President Kennedy federalized the Mississippi National Guard and ordered active Army troops, some already standing by at Memphis, Tennessee, to Oxford. Eventually some 20,000 active Army troops and 10,000 federalized Guardsmen were deployed during the crisis, with 12,000 men in the immediate area and the remainder standing by at other stations.
With the military forces in firm control the tension rapidly subsided and the number of troops was scaled down, although as at Little Rock some federal protection had to be provided throughout the school year. Bombings and other racially motivated incidents in Birmingham, Alabama, in May 1963 forced President Kennedy to send Regular troops to Alabama bases. Later that year an integration crisis, first at the University of Alabama and then in the public schools of several Alabama cities, led him to federalize the entire Alabama Guard, although he used only part of it. In 1965 President Johnson employed both Regulars and Guardsmen to protect civil rights marchers along the route from Selma to Montgomery. A riot in Rochester, New York, in 1964, and a far more serious one in the Watts district of Los Angeles in 1965, with killing, looting, and burning on a large scale, drew attention to the fact that the problem was not confined to the South. Restoration of order in the city required the efforts of 13,400 California Guardsmen as well as city and state law enforcement officers. Racial disturbances continued to occur during the next two years, with particularly serious outbreaks in 1966 in Chicago, Cleveland, Cicero, Illinois, and San Francisco. They increased sharply in 1967 when more than fifty cities reported disorders during the first nine months of the year. These ranged from minor disturbances to the extremely serious disorders of Newark and Detroit, both of which occurred in July. Aside from state and local law enforcement officers, only the National Guard in its state capacity was used in the Newark riot, whereas in the destructive Detroit outbreak the governor of Michigan not only used the National Guard but also requested and, after some delay, obtained federal troops. This was the first time since the Detroit riot of 1943, when the Michigan National Guard was overseas, that a governor had requested federal assistance to put down a civil disturbance. During the Detroit riot of 1967 the task force commander had over 10,000 Guardsmen and 5,000 Regulars under his command and deployed nearly 10,000 men before the crisis passed. Since disorders were occurring with greater frequency, President Johnson on July 28, 1967 appointed a National Advisory Commission on Civil Disorders for the purpose of investigating the causes and possible cures, with Governor Otto Kerner of Illinois as its chairman. The Kerner Commission, as it came to be known, concluded in its report early in 1968 that "our Nation is moving toward two societies, one black and one white—separate and unequal." The events recently experienced called attention to the racial imbalance in the National Guard and led to more training for both the Regular Army and the Guard as well as to more sophisticated planning by the Army in preparation for possible future disturbances. The assassination of Dr. Martin Luther King, Jr., in Memphis, Tennessee, on April 4, 1968 produced a wave of rioting, looting, and burning in cities across the country, and the National Guard was used by the states in many places to subdue the rioters. Federal intervention was required in the troubled cities of Washington, Chicago, and Baltimore where the government used both federalized Guardsmen and Regular troops, deploying over 40,000 men in three cities alone. Only a portion of the forces was committed to control rioting. 
On April 22, 1968, in the wake of the riots, the Army established a new agency in the Office of the Chief of Staff, the Directorate for Civil Disturbance Planning and Operations. Designed to provide command facilities for the Army's role as the agent of the Department of Defense in civil disturbance matters, the agency became the Directorate of Military Support on September 1, 1970.

Although the years immediately following 1968 produced no great racial disturbance, they did see the continuation of a series of large and small antiwar demonstrations in which federal and National Guard troops were employed. In October 1967 a large demonstration against the war took place at the Pentagon. The government had assembled protective forces that included 236 marshals, some of whom had been at Oxford, Mississippi, in 1962, and a military force which, including troops actually operational at the Pentagon and those in reserve, totaled around 10,000 men.

Massive antiwar rallies were staged in Washington in November 1969 and May 1970, but these were generally peaceful; federal troops were positioned in the capital area but not used. Student protests against the Cambodian operations of 1970, however, led to tragedy at Kent State University in Ohio. National Guardsmen, under provocation from some students, fired upon the demonstrators, killing four, including two women, and wounding a dozen others. The Kent State incident and another at Jackson State College in Mississippi involving students and police led to the appointment of the President's Commission on Campus Unrest, headed by former governor of Pennsylvania, William W. Scranton. The Scranton Commission found that campus unrest reflected the crisis over the war and related matters that gripped the nation.

An extended antiwar protest in the nation's capital took place in April and May 1971. A peaceful and impressive demonstration by Vietnam veterans was followed by an attempt on the part of youthful demonstrators on May Day to tie up Washington traffic and prevent government workers from reaching their jobs. The government deployed about 2,000 National Guardsmen, who were sworn in as special policemen, 3,000 marines, and 8,600 troops of the Regular Army. The police and troops guarded highways and bridges and kept the traffic moving despite minor efforts by the demonstrators to carry out their plans.

Secretary McNamara and the New Management System

The operational activities of the armed forces during the Kennedy and Johnson administrations reflected but one portion of the wide range of problems confronting the nation's civilian and military leaders. At the same time, changes of far-reaching importance were being carried out in less publicized areas.

The primacy of the heavy manned bomber as the nation's main instrument of nuclear deterrence had come into question after the Korean War and finally ended in the sixties. President Kennedy and his Secretary of Defense, Robert S. McNamara, followed the trend of the Eisenhower period and missiles gradually replaced some of the strategic bombers. For fixed wing aircraft, the sixties saw the end of a cycle that began in Korea and the restoration of their support role. For the ground forces, Vietnam brought about a reaffirmation that in conventional and limited wars the ground units bear the brunt of battle. The steady decline of the Army during the Eisenhower years was dramatically reversed in 1961 as the Army grew in numbers and its portion of the defense budget increased.
Within the Department of Defense, exercise of the more extensive authority granted the Secretary of Defense by the reorganization act of 1958 had begun. When Mr. McNamara became secretary in 1961, he accelerated the process. President Kennedy gave McNamara two instructions: develop the force structure necessary to meet American military requirements, without regard to arbitrary or predetermined budget ceilings, and procure and operate the necessary force at the lowest possible cost.

In accordance with McNamara's concept of centralized planning the Joint Chiefs of Staff, assisted by the services, continued to establish the military plans and force requirements deemed necessary to support U.S. national security policies. The forces, however, were now separated according to function, such as strategic retaliation, general purpose, and reserve, and placed in what was called a program package. When McNamara received these packages, he considered whether each one contained balanced forces and resources to accomplish its function, correlated the costs and the effectiveness of the weapon systems involved, and then forwarded the approved packages in the annual budget for Presidential and Congressional action. To provide assistance in long-range planning, a five-year projection of all forces, weapon systems, and defense activities, together with the cost, was also drawn up yearly for McNamara's endorsement.

Initially the Kennedy administration had three basic defense goals: to strengthen the strategic retaliatory forces; to build up the conventional forces so that a flexible response could be made to lesser challenges; and to improve the over-all effectiveness and efficiency of the defense effort. To attain the first objective, intercontinental ballistic missiles in hardened sites and Polaris nuclear-equipped submarines were added to provide the United States with the capability of retaliating in force in the event of a Soviet nuclear attack.

The second goal gained quick impetus from the Berlin crisis as Army strength alone rose from 860,000 to over 1,060,000 in 1961 and the Navy and the Air Force conventional forces also made modest gains. Although the National Guard units called up for the crisis were released in mid-1962, the Army was authorized in the meantime to activate two Regular divisions, for a total of sixteen, and to retain a permanent strength of 970,000 men. The influx of men allowed many units to remedy understrengths, and the additional funds allocated during the build-up permitted the procurement of new equipment and weapons to modernize the Army. As the Army budget rose from $10.1 billion to $12.4 billion in fiscal years 1961 and 1962, almost half the increase was allocated to the purchase of new vehicles, aircraft, missiles, and other equipment.

Seeking greater efficiency and reduced costs for the defense effort, Secretary McNamara instituted changes in organization and procedures utilizing the latest management techniques and computer systems. He established firm control over the services through close budget supervision, and gradually centralized under the Office of the Secretary of Defense many activities formerly administered separately by one of the services. Since a great number of supply items and related services were in common use throughout the Defense Department, he established the Defense Supply Agency in 1961.
The agency assumed control of the five old and three new commodity single managerships, and of the Defense Traffic Management Service, as well as functions relating to cataloging and standardization. To the Defense Supply Agency fell the management, purchase, and distribution of items such as food, petroleum products, and medical, automotive, and construction supplies at the wholesale level. Centralization permitted mass buying at competitive prices, the establishment of tighter inventory controls through the use of computers, the standardization of items to eliminate duplication, and the consolidation of supply installations. Tied in closely with the objectives in setting up the Defense Supply Agency was the launching of a five-year defense cost reduction program in 1962. Designed to cut procurement and logistics costs throughout the Defense Department, the program had three main goals: to buy only what was needed, with no frills; to purchase at the lowest sound price after competitive bidding whenever possible; and to decrease operating costs. Centralization rather than cost reduction was the prime aim in setting up the Defense Intelligence Agency in 1961. Mr. McNamara had directed that all defense intelligence operations should be co-ordinated at a higher level and that one office should prepare his intelligence estimates. The effects of the 1958 reorganization were most noticeable in the decision-making process. By maintaining close watch over such matters as budget and finance, manpower, logistics, and research and engineering, the Secretary of Defense tightened civilian control over the services and carried unification much further than any of his predecessors. One of the moves designed to improve unified action was Secretary McNamara's creation of the U.S. Strike Command in 1961. By combining the Army's Strategic Army Corps with the Air Force's Tactical Air Command, the new command had combat-ready ground and air support forces that could be deployed quickly to meet contingencies or to reinforce overseas units. The Army and Air Force components of Strike Command remained under the control of their own services until an emergency arose, then passed to the operational control of Strike Command. In view of the changes in organization and procedures at the Defense Department level, it was not surprising that the Secretary of Defense should also direct a thorough review of the Army's organization in 1961. A broad reorganization plan, approved by the President in early 1962 under the authority of the Reorganization Act of 1958, called for major shifts in the tasks performed by the Department of the Army Staff and the technical services. The Army Staff became primarily responsible for planning and policy, leaving the execution of decisions to the field commands. In an effort to organize the Army along more functional lines, centralize such matters as personnel, training, and research and development activities, and integrate supply operations, most of the technical services were abolished. The statutory offices of the Chief Chemical Officer, the Chief of Ordnance, and The Quartermaster General were completely eliminated. The Chief Signal Officer and the Chief of Transportation continued to perform their duties as special staff officers rather than as chiefs of services. 
The Chief Signal Officer later regained a place on the General Staff when he became Assistant Chief of Staff for Communications-Electronics in 1967, but the Chief of Transportation's activities were absorbed by the Deputy Chief of Staff for Logistics in 1964. The Chief of Engineers retained his special status only with respect to civil functions; his military functions were placed under the general supervision of the Deputy Chief of Staff for Logistics until 1969, when he was again accorded independent status.

Among the technical services, only The Surgeon General emerged with his position intact after the reorganization. In the administrative services, The Adjutant General and the Chief of Finance also lost their statutory status and became special staff officers; later, in 1967, the Office of the Chief of Finance was discontinued as a special staff agency and its functions were transferred to the Office of the Comptroller of the Army. A new Office of Personnel Operations was established on the special staff level to provide central control for the career development and assignment of all military personnel. Officers of the technical and administrative services retained their branch designations but the management of their careers, with certain exceptions, was taken over by the Office of Personnel Operations. Although many of the most important Quartermaster functions were given to the Defense Supply Agency, a new Chief of Support Services assumed responsibility for such matters as graves registration and burials, commissaries, and clothing and laundry facilities.

Most of the operating functions lost by the Army Staff and the technical services were allocated to the U.S. Continental Army Command and to two new commands—the U.S. Army Materiel Command (USAMC) and the U.S. Army Combat Developments Command (USACDC). Continental Army Command became responsible for almost all of the Army schools and for the training of all individuals and units in the United States, but lost its test and evaluation mission to Army Materiel Command and turned over combat development activities to Combat Developments Command. The Army Materiel Command took over many of the tasks formerly assigned to the technical services and set up subcommands to handle them. It assumed operating responsibility for research, development, testing, production, procurement, storage, maintenance, and distribution of materiel on a wholesale basis. To the Combat Developments Command went the mission of developing organizational and operational doctrine, materiel objectives and qualitative requirements, war games and field experimentation, and cost effectiveness studies. This command was to provide answers to questions on how the Army was to be organized and equipped and how it was to fight in the field. The transfer of functions began in the spring of 1962 and the new commands became operational in the summer.

During the following year other major changes affecting staff responsibilities took place. In January 1963 the Office of Reserve Components was established to exercise general supervision over all plans, policies, and programs concerning the National Guard and Reserve forces. The statutory responsibility of the Chief, National Guard Bureau, to advise the Chief of Staff on National Guard affairs and to serve as the channel of communications between the Army and the states' adjutants general was not altered by the creation of the new agency.
The Chief, Army Reserve, however, did lose his control of the Reserve officers training program, which was transferred to the Office of Reserve Components in February and later to the Deputy Chief of Staff for Personnel in 1966. Since the Deputy Chief of Staff for Military Operations (DCSOPS) had become heavily involved in planning for joint operations, the Army in the spring of 1963 created an Assistant Chief of Staff for Force Development to assure adequate attention to affairs that primarily concerned the Army. To prepare the Army force plans and structures in consonance with requirements developed by DCSOPS and with manpower and budget limitations as well became the main task of the new office; DCSOPS remained the principal adviser to the Chief of Staff on all joint matters and also retained responsibility for strategic planning and the employment of combat-ready Army troops. As it turned out, neither the new Assistant Chief of Staff for Force Development nor the Army Comptroller had sufficient authority to manage the Army's resources or to integrate the proliferating automatic data processing systems. Gradually the responsibility for co-ordinating these shifted to the secretariat of the General Staff, which became almost a "superstaff." To provide for centralized direction and control of resource management programs, including management information systems, force planning, and weapons system analysis, the Army in February 1967 established the Office of the Assistant Vice Chief of Staff, to be headed by a lieutenant general. The new office under the Vice Chief of Staff would have authority to manage the various programs and the secretariat could return to its normal duties. Tactical Readjustment for Flexible Response The reorganization of the Army staff was accompanied by a major overhaul of the tactical organization. In practice the pentomic division had proved to be weak in staying power and needed more men to be capable of sustained combat. In 1961 the Army revised the divisional structure to provide a better balance between mobility and firepower and to insure greater flexibility. Under the Reorganization Objective Army Division (ROAD) concept the Army began in early 1962 to form four types of divisions—infantry, armor, airborne, and mechanized each with a common base and three brigade headquarters. The base contained a headquarters company, a military police company, a reconnaissance squadron, division artillery, and a battalion each of engineer, signal, medical, supply and transportation, and maintenance troops. In the combat mix of the ROAD division the Army attained flexibility, since the numbers and types of battalions could be varied at will to carry out different missions. An infantry division might ordinarily have eight infantry and two armor battalions with a total strength of 16,000 men, but could control up to fifteen battalions if the need arose. When terrain permitted, more armor or mechanized elements could be added; in the swamps or jungles, the accent could be placed upon the infantry battalions. The first ROAD divisions were the newly reactivated 1st Armored and 5th Infantry (Mechanized) Divisions, which were tested during l962. When the concept worked out well, the Army in 1963 began to convert the remaining fourteen active divisions and to reorganize the National Guard and Army Reserve divisions under ROAD. The active and Reserve reorganizations were completed in mid-1964. 
The search for mobility sparked another tactical innovation in 1962 when an Army board compared ground and air vehicles in terms of cost and efficiency. The board recommended that new air combat and transport units be formed. The concept of an air assault division employing air-transportable weapons and aircraft-mounted rockets to replace artillery involved the delicate question of Air Force and Army missions, but Mr. McNamara decided to give it a thorough test. Organized in February 1963, the 11th Air Assault Division was successfully tested for two years. By the spring of 1965 the situation in Vietnam offered an opportunity to demonstrate its capabilities for mobility in rough terrain. In July the division was inactivated and the personnel and equipment used to reorganize the 1st Cavalry Division under the air mobile concept at Fort Benning, Georgia. The 2d Infantry Division took over the personnel and equipment left by the 1st Cavalry Division in Korea; an exchange of divisional colors and the repainting of divisional insignia accomplished the switch.

The new airmobile division had an authorized strength of 15,787 men, 428 helicopters, and 1,600 road vehicles (half the number of an infantry division). Although the number of rifles and automatic weapons in the division was the same as in an infantry division, the supporting weapons were lighter. The direct support artillery was moved by helicopter and had no vehicular prime movers. Instead of a general support artillery battalion, the airmobile division used an aerial rocket artillery battalion. Since all equipment was designed to move by air, the division total weight was only 10,000 tons, less than a third of an infantry division's.

The development of the air assault division provided fresh impetus to the dramatic growth of Army aviation. Although Army-Air Force agreements and decisions at Defense Department level during the fifties had generally been designed to restrict the size and weight of Army aircraft and their area of operations, the Army had pushed ahead vigorously in its research and development program. Concentrating heavily in the rotary-wing field, the Army by 1960 had built up its inventory to over 5,500 aircraft, almost half of them helicopters. The versatility of rotary-wing aircraft made them ideal for observation and reconnaissance, medical evacuation, and command and control missions.

Under the service roles and missions agreement all of these activities were permissible for the Army when conducted in the battlefield area. But the Army expansion into the development of larger craft that could be used for transporting large loads of troops and supplies and the subsequent arming of helicopters raised questions concerning the proper role to be played by the two services. A reassessment of missions and roles in 1966 placed the larger transports that the Army had developed with the Air Force. Insofar as helicopters were concerned, however, the Army maintained primacy, partly because of the demonstrated ability of rotary-wing aircraft to support land combat operations and partly because the Army had been farsighted in its research and development effort. Although the Army inventory of fixed-wing aircraft slipped slightly in the ensuing years, the number of helicopters, spurred by the demands of the war in Vietnam, soared from about 2,700 in 1966 to over 9,500 by mid-1971. The Vietnam War also accelerated the development and introduction of many improved and new Army aircraft models.
Among the new additions were the HueyCobra, a gunship armed with combinations of rockets, 7.62mm. miniguns, and machine guns, and the Cayuse observation helicopter. Later versions of the Mohawk fixed-wing observation plane, the Chinook medium transport helicopter, and the Iroquois (Huey UH-1) light transport helicopter, among others, incorporated technological advances and tactical adaptations that greatly improved their value in field operations. One of the new aircraft, the Cheyenne, the first helicopter designed as a weapons system to provide fire support to ground troops, experienced technical difficulties in 1969 and had to undergo further modification and testing. New weapon systems, vehicles, and equipment and new organizations to provide better support and greater flexibility for the fighting forces continued to emerge in bewildering rapidity during the sixties. The technological developments in air and ground vehicles promoted mobility, while the advances in communications facilitated the exercise of command and control and the gathering of intelligence. To handle the great number of men and to keep track of the countless items of supply, complex and efficient computer systems were put into operation. Along with sophistication of equipment came the training of qualified men to operate and maintain the machines and weapons and the development of an elaborate and responsive logistics system to provide the parts, fuel, and ammunition to keep them in action. The Army school system furnished the bulk of the basic technical training, although civilian manufacturers frequently supplied specialists to demonstrate to and instruct military units in the operation and maintenance of new products. Army schools had to keep abreast of the latest technological developments, therefore, and to turn out soldiers who would be able to use new items to the best advantage. The man behind the gun or machine became all the more important as the weapons and engines grew deadlier and more efficient. In the environment of the sixties professional skills had to be resharpened continually, but the expanding role of the soldier required other talents as well. In the underdeveloped areas of the world, battlefield proficiency was only part of the task. Military victories might gain real estate, but if they failed to win the subsequent support of the local population they were of little consequence. In counterinsurgency operations the important objective was to convince the people in the countryside of the central government's interest and concern for their safety and welfare and to earn their loyalty and confidence the only victory with any permanent meaning. Civic action and counterinsurgency operations were not new to the Army for they had played a dominant role in the opening of the American West and the pacification of the Philippines. During the occupation of Germany and Japan after World War II civic action programs had done much to improve the relations between the American military and the peoples of those countries; broad economic assistance and political and educational reorientation combined with a willingness to co-operate on the part of the German and Japanese people and their leaders, had simplified the problem of reconstituting civil authority. In underdeveloped countries the task was usually much more difficult, since communications were poor and the bonds between the central authority and the rural areas were seldom strong. 
Special forces, capable of operating independently and of reaching to the grass roots, were required to counter insurgency in such places. Although the Army had trained small units in psychological warfare, and counterinsurgency operations during the fifties, President Kennedy's personal interest in the field gave the program a significant boost in 1961. The Special Forces expanded sharply from 1,500 to 9,000 men in a year and continued to grow until 1969. Even more important, new emphasis in Army schools and camps provided all soldiers with basic instruction in counterinsurgency techniques. The Special Forces helped train local forces to fight guerrillas and taught them skills essential to strengthening the nation internally. Special Forces Groups were oriented toward specific geographic areas and given language training to facilitate their operations in the field. Each group was augmented with aviation, engineer, medical, civil affairs, intelligence, communications, psychological warfare, military police, or other elements that could be tailored for an assignment. Individual members of each team sent out could be trained also in other skills to increase their versatility. Working on a person-to-person basis, the Special Forces strove to improve the image of the government armed forces and to foster co-operative attitudes among the rural people. Special warfare training was also given to the Reserve forces to keep them current with counterinsurgency developments and the measures necessary to counteract internal aggression and subversion. One phase of this training—crowd and riot control tactics—became of particular importance because of the growing threat of civil disturbance. The Reserve Forces and the Draft Concerned over the expenditure of defense funds for Reserves that were long on numbers but short on readiness, Mr. McNamara ordered a thorough analysis of the status and functions of the Reserve forces during the early sixties. Maintaining a force of 400,000 National Guardsmen and 300,000 Army Reserves on a paid drill status, for instance, made little sense unless these backup forces could step in quickly in a crisis and replace the regular strategic reserve. The performance of the Army National Guard and Army Reserve units called up for the Berlin crisis in 1961 had demonstrated that the Army Reserve forces could not, with the level of support and training at that time, become ready for combat in less than four to nine months. In light of current military requirements, the time lag was considered excessive. In the spring of 1962 Mr. McNamara announced a plan that the Army had developed to reduce and realign the Army National Guard and to lower the paid drill strength of the Army Reserve. Considerable opposition from Congress and many state officials led him to defer action on the reduction, but he carried out the realignment the following year, eliminating in the process four National Guard and four Reserve divisions as well as hundreds of smaller units. At the close of 1964 the Secretary of Defense proposed a far more drastic reorganization of the Reserves to bring them into balance with contingency war plans. His contention was that the dual National Guard-Army Reserve management system was duplicative and that by consolidating units the paid drill strength could be trimmed from 700,000 to 550,000, and 15 National Guard and 6 Reserve divisions for which there were considered to be no military requirements would be eliminated under the secretary's proposal. 
The reorganization plan would place all units under the National Guard; only individuals would be carried in the U.S. Army Reserve. The storm of protest from Congress, the states, and the Reserve associations was quick and long-lived. While the debate went on, McNamara sought to achieve partial implementation of his reorganization goal by ordering the inactivation of Army Reserve units that were not required for contingency war plans. Despite strong Congressional opposition, the excess units, which included all 6 Army Reserve combat divisions and a total of 751 company and detachment size units, were eliminated by the end of 1965. In the fall of 1967, after concessions had been made by both Congress and the Department of Defense, a mutually acceptable reorganization plan that met Secretary McNamara's basic reorganization objectives was approved. Under the new structure, which was fully implemented by the end of May 1968 the Army Reserve retained organized units, but its paid drill strength was reduced from 300,000 to 260,000. Only three U.S. Army Reserve combat brigades were included; the remainder were training and support units. Army National Guard strength remained at slightly over 400,000 men, but the division total was lowered from 23 to 8, while the number of separate brigades was raised from 7 to 18. All units in the new force structure were to be manned at 93 percent or better of wartime strength and were to be fully supported with technicians, equipment, repair parts, and other essentials. To help obtain the men to fill the Reserve units, legislation had been passed in September 1963 revising the Reserve Forces Act of 1955. The new law provided for direct enlistment—an optional feature of the 1955 act—and the term of obligated service was reduced from eight to six years. The length of the initial tour became more flexible, generally ranging from four to seven months, depending upon the particular military skill involved. Under this Reserve enlistment program, recruits could be given longer periods of initial active duty to train them to fill the requirements for more highly skilled specialists. The ROTC program was revised in 1964 to improve the flow of qualified Reserve officers into both the active Army and the Reserve components. The four-year senior program at colleges and universities was strengthened by the addition of scholarship provisions, and a two-year program was added for students who had been unable to complete the first two years of ROTC and who had undergone at least six weeks of field training to qualify them for entrance into the advanced course (last two years). Congress also authorized the other military departments to establish a junior ROTC program at qualified public and private secondary schools, beginning in 1966. While most newly commissioned National Guard officers were products of state-operated officer candidate schools, ROTC from 1965 to 1970 continued to be the primary source of new officers for both the Regular Army and the Army Reserve. Cutbacks in active Army officer requirements for fiscal year 1971 indicated that a growing number of ROTC graduates would not be required to perform a two-year active duty stint, but would be released to the Army National Guard or the Army Reserve after three to six months of active duty for training. 
Recent reductions in the number enrolled in the ROTC program reflect the changeover of the ROTC basic course from required to elective status in many participating institutions, reduced draft pressure, prospects for an all-volunteer army, and antimilitary activities on the college campuses. Although the Army build-up for the war in Vietnam increased the pressures for a Reserve call-up to replace the Regular troops and draftees sent overseas, the Johnson administration decided in July 1965 not to call up the Reserve forces to meet the Army's immediate needs for additional manpower. The President may have been influenced by dissatisfaction caused by the Berlin call-up, the restrictions usually set by the Congress on the length of the Reserve tour of active duty, and the desire to retain the Ready Reserves as an emergency force. To cover the void in the Army's ability to meet other contingencies created by utilization of active Army assets to supply initial Vietnam requirements and the cadres for newly formed units, a Selected Reserve Force for quick response was established in August 1965, using elements of 8 Army National Guard divisions and some backup units from the Army Reserve. The Selected Reserve Force contained over 150,000 men—about 119,000 National Guardsmen and 31,000 Army Reservists—and consisted of 3 divisions and 6 separate brigades with combat and service support units. All units were authorized to maintain 100 percent strength, received extra training, and were given priority in equipment allocation. To relieve Selected Reserve Force personnel of the burden of additionally prescribed training assemblies—100 annually as compared to the normal 48—the force was reorganized during 1968 and the additional training requirements were reduced. The force was abolished in September of 1969. By early 1968 the strain placed on active forces in meeting the continuing Vietnam build-up, keeping up other worldwide deployments, and maintaining a strategic reserve had become so great that these tasks could no longer be met through reliance upon increased draft calls. The urgency of the situation was underscored by Communist provocations in Korea and the enemy's Tet offensive in Vietnam. To alleviate the situation, President Johnson directed the Secretary of Defense on April 11, 1968, to mobilize units and individuals of the Ready Reserve for a period not to exceed twenty-four months. This smallest of the three partial mobilizations since the end of World War II brought into federal service 34 Army National Guard units and 42 Army Reserve units with a combined strength of 17,415. An additional 2,459 members of the Individual Ready Reserve (the new designation for the Ready Reserve Mobilization Replacement Pool) were ordered to active duty as fillers for the activated units and to meet critical active Army shortages. Of the 76 units mobilized, 43 went to Vietnam and the remaining 33 were assigned to the Strategic Army Forces. As in earlier mobilizations, failure to attain peacetime training objectives and the shortages of equipment proved major problems that generally prevented the mobilized units from meeting postmobilization readiness objectives. But despite these shortcomings, the partial mobilization of 1968 proved to be the most successful to date, and forces were provided for both the refurbishment of the strategic reserve and Vietnam deployments much earlier than would have been possible if new units had been started from scratch. 
The last of the mobilized Reserve component units was returned to reserve status in December 1969. Three months later selected Army National Guard and Reserve units were once again ordered into federal service. On March 18, 1970, New York City mail carriers began an unauthorized work stoppage that threatened to halt essential mail services. President Nixon declared a national emergency on the 23d, thus paving the way for a partial mobilization of the Ready Reserves that began the next day. A total of more than 18,000 National Guard and Army Reserve members participated with other Regular and Reserve forces in assisting U.S. postal authorities in getting the mails through. The postal workers soon returned to work, and by April 3 the last of the mobilized reservists were returned to civilian status. The phase-down of U.S. military operations in Vietnam and the accompanying cutbacks in active force levels caused renewed reliance to be placed on reserve forces. As early as November 1968 Congress, concerned that the Reserve components were not being adequately provided for, passed the Reserve forces "Bill of Rights." Signed into law by President Johnson in December, the act placed upon the service secretaries the responsibility for providing the support needed to develop Reserve forces capable of attaining peacetime training goals and the responsibility for meeting approved mobilization readiness objectives. The act also established the position of Assistant Secretary for Manpower and Reserve Affairs within each of the military departments and gave statutory status to the position of Chief of the Army Reserve. In August 1970 Secretary of Defense Melvin R. Laird emphatically affirmed that the Reserve components would be prepared to provide the units and individuals required to augment the active forces during the initial phases of any future expansion. By mid-1971 the Army's Reserve components had substantially recovered from the turbulence associated with the reorganization and partial mobilization of 1968. Defense Department plans for yet another reorganization, designed to bring the Reserve components' troop program into consonance with new organizational concepts emanating from the Vietnam experience, were under way, but they did not involve the loss of any major units. Since the President had elected not to call up the Reserve forces in the early stages of the build-up, the main burden of meeting the Army's need for additional manpower in Vietnam had fallen upon the Selective Service system. Increased draft calls and voluntary enlistments rather than a resort to the Reserves swelled the Army strength from 970,000 in mid-1965 to over 1,500,000 in 1968. The Army's divisions increased from sixteen to nineteen during this period, and Army appropriations rose from $12 billion in fiscal year 1965 to almost $25 billion in fiscal year 1969. Reliance upon Selective Service to meet the growing requirements of the Army when large Reserve forces were available drew critical comments from both Congress and the public. This would have been true whether the choice had been made by draft board or lottery, or whether it had been based on physical, marital, or educational status. The nub of the matter was that some were selected while others stayed home. On the other hand, there was no practicable way to change this state of affairs, since the armed forces could not use, nor did they need, all those young men eligible for military service. 
The four-year extension of the draft law in 1967 attempted to eliminate some of the imbalance, and the introduction of a lottery system in late 1969 helped to alleviate the lot of the potential draftee by limiting the period during which he could be selected to one year, but the basic problem remained. The unpopularity of the war in Vietnam among certain members of the draft age group rose as the conflict dragged on, and evidenced itself in a rising number of antiwar demonstrations, draft card burnings, and efforts to avoid military service. Such a climate was not calculated to bring forth enough volunteers to make the draft unnecessary. Problems and Prospects Since the Army was a segment of American society, it had to deal with the same social problems that confronted the nation. The polarization of opinion over the war in Vietnam, the increasing use of drugs by American youth, and the mounting racial tensions in the United States all had their effects on the Army—particularly since most of its men were young and members of the age group most affected by new currents within the larger society. Thus, the widespread opposition to the war in Vietnam that swept over college campuses in the late sixties was reflected in the Army. Some soldiers participated in protests and demonstrations or formed antiwar groups and circulated antiwar literature. While soldier dissent was not an entirely new phenomenon in American military history, the dissent generated by the Vietnam War was more overt than any that had occurred before. Although soldier antiwar activities attracted considerable attention, there is no indication that they caused any loss of combat effectiveness, nor did they create a movement that won general soldier support. The gradual reduction of American troop strength in Vietnam and the falling casualty rates during the 1969-71 period served to lessen the dissent, but it is not likely to subside completely until American involvement in Vietnam ends. The increasing use of drugs in the United States by young people fostered a similar rise in drug usage in the Army, since many recruits had already been exposed to drugs before they entered the service. The situation was compounded by the low cost and easy availability of drugs in many foreign countries, especially in the Far East. Soldiers stationed in Southeast Asia could obtain inexpensive drugs without difficulty under the loose enforcement procedures employed by the local authorities, and the number of soldier users steadily increased. In an effort to identify and treat these men before they returned home, the Army initiated a program in 1971 requiring urine tests for soldiers leaving Vietnam. Addicts became subject to immediate detoxification and then were given follow-up treatment after their arrival in the United States. The Army also conducted a massive drug information campaign to warn potential users of the dangers involved. To a large degree the ultimate success of the Army's programs depends upon the effectiveness of the measures being taken by the United States and by other nations to curb the drug traffic. Another pressing problem that plagued the nation and the Army was racial discrimination. The Army had desegregated its units during the Korean War and gradually improved the status of the black soldier within the service and in the civilian community by insisting that equal treatment be given all soldiers regardless of race or color. 
Despite the bitter civil rights struggle of the sixties, some progress was made in securing adequate off-post housing and in opening up recreational facilities to all soldiers. Future gains, however, will be closely tied to domestic developments and a possible shift in American racial attitudes. Since soldiers are conditioned by their pre-service environment and training, many held strong beliefs about racial equality when they entered the Army. In combat, race relations tended to be subordinate to the common danger and to the necessity to work together for survival. Under static conditions, however, race relations were sometimes uneasy, much as they were in many American cities, and polarization of black and white soldiers took place in some units at home and abroad, reflecting the split in the civilian society and the rise in militant racial groups. The Army tried to break down this trend by developing better communication between the two groups, by building confidence in the promotion and judicial systems, and by stressing the role of leadership in reconciling the differences that aggravated the tensions. There is no quick solution to this problem, since it is dependent upon the national effort to achieve racial equality. Perhaps the most important change that lies ahead of the Army concerns its future composition. In April 1970 President Nixon proposed that the nation should start to move in the direction of an all-volunteer armed force and the end of Selective Service ". . . as soon as we can do so without endangering our national security." To carry out the President's proposal, the Army, as the service relying most heavily upon the draft, instituted a Modern Volunteer Army Program in October 1970 and appointed in the Office of the Chief of Staff a special assistant with three-star rank to supervise it. The principal task is to increase the existing levels of enlistment, reenlistment, and officer retention in both the active and the Reserve forces. Too often the young officer or enlisted man left the Army as soon as his obligation expired. The problem was particularly acute in the combat arms, for although there was an estimated need to triple the general rate of enlistment, the combat arms rate would have to be raised twelvefold if the Army was to become a completely volunteer force in 1973. Besides an enlarged recruiting effort, which included substantial advertising and publicity campaigns, the program consisted of a variety of steps designed to promote professionalism within the Army and to raise the quality of life for its individual members. Improvements in these interrelated areas were designed to raise the Army's ability to attract and retain men of the quality and in the numbers needed. There has been a considerable amount of experimentation in the program in the effort to assemble the right combination of incentives. Visualized by the planners is a gradual reduction in the Army's reliance upon the draft until the final goal—the all-volunteer Army—is achieved. The draft meanwhile has continued, and as in World War II and Korea the Army fighting in Vietnam has been a blend of professionals and citizen soldiers.
<urn:uuid:2ce377cc-b771-4fc3-aa8b-ed3ad329fac7>
CC-MAIN-2016-26
http://history.army.mil/books/AMH/AMH-27.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971916
14,205
3.234375
3
Peripartum cardiomyopathy, a form of heart failure that develops late in pregnancy or shortly after delivery, results in a frightening turn of events that can leave new mothers suffering from a lifelong chronic heart condition that is often fatal. Now, investigators at Beth Israel Deaconess Medical Center have discovered clues behind this dangerous condition, providing the first clear evidence that peripartum cardiomyopathy is a vascular disease, brought about by an imbalance of angiogenic proteins in the heart during the peripartum period. Reporting in the May 17 issue of Nature, a team of researchers led by Zoltan Arany ’98, an HMS assistant professor of medicine and an investigator in Beth Israel’s CardioVascular Institute, details the underlying mechanisms for the condition and identifies preeclampsia and multiple gestations as risk factors. In addition, the investigators point out the beneficial effects that proangiogenic therapies could have for women with this vascular disease. Peripartum cardiomyopathy affects approximately one in 1,000 women with no known history of heart disease. Symptoms can be mild to severe, and include shortness of breath caused by the heart’s diminished pumping ability. About one-half of women who develop the condition will spontaneously recover, but others will grow worse, even to the point of requiring a heart transplant. “It’s been a real mystery,” says Arany. “The majority of women who develop this condition are otherwise healthy, even active. We know that the real stressors of pregnancy occur in the first trimester. Why then, are these mothers-to-be developing such serious problems at the end of pregnancy?” Through a series of studies in both animal models and humans, the researchers determined that peripartum cardiomyopathy is a two-hit disease that begins with elevated late-pregnancy signals to prevent angiogenesis, or normal blood vessel growth, and continues when something as yet undiscovered leaves women susceptible to cardiac damage—possibly an infection or genetic predisposition. “This is really a whole new way to think about peripartum cardiomyopathy,” says Arany.
<urn:uuid:cd63dc3c-0555-4931-9b8b-0e34ce81520f>
CC-MAIN-2016-26
http://hms.harvard.edu/news/harvard-medicine/balance
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934568
455
2.90625
3
Insurance Coverage: Know Your Choices What is a Covered Property? Generally, covered properties are divided into four separate categories. The definitions of the property, and the extent of coverage vary by state, company and product. So it is important for the consumer to understand the definitions of the covered property. The four separate categories for your home, as defined by insurance companies, are: 1. Dwelling – The structure of the house is considered a covered property. 2. Other Structures – These are structures that are separate from the house, or connected to the house by a fence, wire or other form of connection, but not otherwise attached to the dwelling, such as a tool shed or detached garage. 3. Personal Property – The contents of your home are your personal property. This includes furniture, appliances and clothing. Not all personal property is covered. Items more appropriately covered under different forms of insurance may have limited or no coverage for loss. These items include, but are not limited to, money, jewelry and firearms. 4. Loss of Use – When a loss occurs due to a covered peril and the dwelling becomes uninhabitable, the cost of additional living expenses is covered. Reimbursement of additional living expenses covers the cost to the insured for maintaining a normal standard of living. “Open Perils” and “Named Perils” Coverage A peril, as referred to in an insurance policy, is a cause of loss, such as fire or theft. Coverage can be provided on an “all perils” basis, or a “named perils” basis. Named Perils policies list exactly what is covered by the policy, while Open Perils (or All Perils) policies will list what is excluded from coverage. Named Perils policies are generally more restrictive. A dwelling policy usually provides coverage for both the dwelling and contents on a named perils basis, while a homeowners policy usually provides coverage for the dwelling on an all perils basis, and for the contents on a named perils basis. Package Versus Peril-Specific Coverage A package policy provides coverage for multiple, but usually not all perils. A homeowners policy, for example, is a package policy typically providing coverage for the perils of fire, lightning, and extended coverage. Extended coverage includes coverage for the perils of windstorm, hail, explosion, riot, civil commotion, aircraft, vehicles, smoke, vandalism, malicious mischief, theft, and breakage of glass. Some policies, such as earthquake or flood policies, provide coverage for specific perils that are often excluded in package policies. Fire and sprinkler leakage damage as a result of an earthquake may be covered by a standard homeowners policy. To purchase the most appropriate insurance, it is important for you to consider what additional perils you may face. And, you should always verify what is covered in your specific policy. Does My Policy Cover That? 1. Earthquakes – Most property insurance policies exclude coverage for losses resulting from earthquakes (although they often cover losses related to fires following earthquakes). Separate policies are typically required to ensure coverage against losses from earthquakes. Some states with risk of loss from earthquakes have government mandated insurance plans that provide earthquake coverage to property owners who are unable to obtain insurance through the voluntary market. (See page 8 for explanation of voluntary and involuntary markets.) 2. Flood – Most property insurance policies exclude coverage for losses resulting from flood. 
So unless you purchase a flood policy, you do not have coverage for flood losses. (For a more comprehensive discussion of flood insurance, please see Preparing for a Flood, page 25.) 3. Hail – Most property insurance policies provide coverage for losses resulting from hail. Hail is a named peril, meaning for coverage to apply under a "Named Perils" policy, hail must be defined as a covered peril. 4. Hurricanes – Most property insurance policies provide coverage for losses resulting from hurricanes, except for flood loss associated with the hurricane. (See Preparing for a Flood, page 25, for more information.) However, some policies only provide limited coverage for hurricanes, or require that a higher deductible be purchased specifically for the hurricane peril. Most states with risk of loss from hurricanes have government mandated insurance plans that provide hurricane coverage to property owners who are unable to obtain insurance through the voluntary market. (See page 8 for explanation of voluntary and involuntary markets.) 5. Tornadoes – Most property insurance policies provide coverage for losses resulting from tornadoes (although they do not cover losses resulting from the peril of flood; see Preparing for a Flood, page 25, for insurance availability). While tornadoes may not be specifically mentioned as a covered form of loss, tornado losses are one event covered under the broader term windstorm. Windstorm includes tornadoes, straight-line winds and hurricanes. However, there may be instances where coverages and deductibles may apply specifically to hurricane and not to all windstorms. 6. Wildfires – All property insurance policies provide coverage for losses resulting from fires. Depending on the level of exposure, you may need to consider a higher deductible to obtain coverage, or keep it affordable. Most states have coverage available via the FAIR plan, or a JUA, if the voluntary market is not willing to provide coverage. How Much Insurance Is Enough? Depending on the type of policy, the different dwelling coverage options could be: 1. Replacement Cost Coverage 2. Actual Cash Value 3. Special Payment - loss is paid before dwelling is repaired, rebuilt or replaced. 4. Functional Replacement Cost or Market Value Coverage - repairs are made using common, modern materials and methods without deduction for depreciation unless repairs are not made, and if a total loss, the payment amount will be the market value of the home. 5. Stated Value - a selected value is established by the insured, and this value is the limit of liability. Depending on the coverage you select at the time of purchase of your policy, if you should incur a loss, the settlement of that loss will vary. A loss can be settled based on a replacement cost, repair cost, or actual cash value basis. Replacement cost is not the market value of your home, nor is it the tax-assessed value. It is the cost to replace the damaged property, with no reduction for depreciation of the damaged property. Actual cash value is the cost to replace the damaged property reduced by an allowance for depreciation. 
Functional cost or market value (also known as repair cost) is the cost to repair the damaged property with equivalent construction for similar use. An example of functional replacement would be to replace a plaster wall with drywall. If stated value coverage is selected, the maximum amount paid at the time of loss is the value of the policy, even if the loss amount is larger than the value of the policy. Personal Property Coverage Choices Depending on type of policy, the different personal property coverage options could be: 1. Replacement Cost Coverage 2. Actual Cash Value What Does Insurance-to-Value Ratio Mean? This is the relationship of the amount of insurance purchased to the replacement value of the property. It is important to have an accurate assessment of the replacement cost value of your home. If you do not, and then have a loss, the cost to actually replace your home may be more than your insurance policy will provide. That means you would be responsible for covering the difference. Major catastrophes, such as earthquakes, hurricanes, and wildfires can often create a demand surge for materials and labor, resulting in increased costs to replace damaged property. This must be considered when establishing the appropriate replacement cost for your property. Most property policies require that the property be insured to at least 80% of the replacement cost, or loss payments will be reduced by a proportion of the insured value to 80% of replacement value. This is referred to as the coinsurance penalty. It is also important to realize that other limits within your policy are a percentage of the dwelling coverage amount. For example, the limit of coverage for your personal property will usually be at 50% of the dwelling limit. Additional coverage is available via endorsement, and is typically increased if you purchase replacement cost coverage for your contents. Replacement Cost Coverage In order to qualify for replacement cost coverage, you will most likely be required to insure your property to at least 80% of the replacement cost. As long as this requirement is met, and if you have a total loss, your insurance policy will cover the total cost of replacing your home. Further, if the property is not insured to at least the 80% value, then the payment for partial losses may be reduced. Additional Limits in Case of Total Loss Many insurance companies offer an endorsement that will provide the full coverage to replace the property in the event of a total loss. Usually, the company requires that the property be insured to at least 100% of the replacement cost of the property in order to qualify for this additional coverage. As long as this requirement is met, if you have a total loss and it costs more to replace than your limit (from a misestimate or demand surge), your insurance policy will be increased. The amount of the increase depends on the endorsement purchased, and can be anywhere from 25% to 100%. Additional coverages may either be included in your policy, or available for a separate price. Coverages like building code upgrades, which provide coverage for upgrades that the community requires for building codes when a home is being repaired or rebuilt as a result of a covered loss, may be available separately. Also, optional coverage for perils, such as earthquake insurance, is often purchased to supplement a homeowners policy.
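To make the settlement arithmetic described above concrete, here is a minimal sketch in Python. The dollar figures, the flat deductible, and the order of the calculations are simplified assumptions for illustration only; actual policy language, depreciation schedules, and the coinsurance threshold vary by company and state.

```python
# Illustrative only: figures and formulas are simplified assumptions,
# not the terms of any actual policy.

def coinsurance_payment(loss, insured_limit, replacement_cost,
                        coinsurance_pct=0.80, deductible=1000):
    """Reduce a partial-loss payment when the home is underinsured."""
    required = replacement_cost * coinsurance_pct      # coverage you were required to carry
    ratio = min(1.0, insured_limit / required)         # proportion you actually carried
    payment = loss * ratio - deductible                # penalty applied, then deductible
    return max(0, min(payment, insured_limit))

def actual_cash_value(replacement_cost, depreciation_pct):
    """ACV = cost to replace, minus an allowance for depreciation."""
    return replacement_cost * (1 - depreciation_pct)

# A home that would cost $300,000 to rebuild, insured for only $180,000:
print(coinsurance_payment(loss=60_000, insured_limit=180_000,
                          replacement_cost=300_000))   # -> 44000.0
# The same $60,000 loss with the 80% requirement met ($240,000 of coverage):
print(coinsurance_payment(loss=60_000, insured_limit=240_000,
                          replacement_cost=300_000))   # -> 59000.0
# A ten-year-old roof settled on an ACV basis with 40% depreciation assumed:
print(actual_cash_value(replacement_cost=20_000, depreciation_pct=0.40))  # -> 12000.0
```

The point of the sketch is simply that the ratio of the coverage you bought to the coverage the coinsurance clause required is applied to the loss before anything else, which is why insuring to value matters even for partial losses.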
<urn:uuid:d443b637-bfee-4f14-9709-b643cc827727>
CC-MAIN-2016-26
http://homeownersinsuranceguide.flash.org/knowyourchoices.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940913
2,048
2.6875
3
Buffer overflow, cross-site scripting, and SQL injection have had their share of the spotlight, so I have recently decided to give more attention to layer two issues and share my findings. Some of the reasons that attracted me to layer two security are that a high percentage of attacks are insider attacks by employees, the threat is underestimated, and whatever sits within the LAN is considered "trusted." More broadband providers are also deploying network access based exclusively on layer two (for fast recovery; the average convergence time for RSTP is far greater than for OSPF and EIGRP). A DoS attack takes on a more damaging dimension at the data link layer than at the network layer. The OSI model was built to allow different layers to work without knowledge of each other, which means that if one layer is compromised the other layers will not be aware of it. Finally, it is worth mentioning that the boundaries of the LAN are no longer physical, as wireless networks are used in the environment.

The first attack type that you might be familiar with is the MAC flood, a.k.a. CAM table overflow (do not confuse the CAM table <MAC vs. port> with the ARP table <IP vs. MAC>). Using tools such as macof (available since 1998), an attacker can generate around 155,000 MAC entries per minute. The average switch CAM table ranges between 16 and 128 Kb, and even with automatic aging of CAM entries, the default aging time in most switches is around 5 minutes. Mitigating this attack means limiting the number of MAC addresses learned per port.

ARP spoofing (ARP poisoning) targets the ARP table and basically takes advantage of the fact that any machine can claim to be any IP; a tool such as ettercap makes this attack very easy to launch. Mitigation can be done with static ARP entries (an administrative burden for large networks) or with DHCP snooping and dynamic ARP inspection, which build a binding table so that newly received gratuitous ARP entries are compared against it.

VLAN Hopping -- switch spoofing. This is a straightforward attack against ports that are configured for trunking, or that negotiate trunking, yet have users connected to them. The attacker can spoof a switch using ISL or 802.1q, and since trunk ports have access to all VLANs -- voila! A tool that can enumerate a switch is Yersinia, or brctl if you are familiar with it. Mitigation is secure configuration: disabling unused ports and setting used ones to access mode only.

VLAN Hopping -- double tagging. This attack is more complex, requires a specific setup, and has its limitations. It only works on the 802.1q trunking standard, but it works even if the port is set to access mode only, because switches do only one level of untagging: when the attacker double-tags a packet, the switch forwards it to the next switch with the inner tag still in place. The attack is limited to being unidirectional, cannot reach a target on the same switch as the attacker, and requires that the attacker's port and the trunk share the same native VLAN.

STP manipulation. The spanning tree protocol prevents layer two loops from forming when switches are interconnected via multiple paths for redundancy; switches exchange BPDU messages to elect a root bridge, to elect a per-VLAN designated bridge, and to signal topology changes. The pitfall of STP/RSTP is the lack of authentication in the BPDU messages: if an attacker impersonates a switch and keeps sending topology-change BPDUs, the other switches will enter an endless cycle of recomputing the spanning tree. 
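Before turning to the mitigations, here is a rough sketch (not production code) of how a monitoring station could flag that kind of BPDU abuse. It assumes you already have a capture feed that yields a timestamp, a bridge ID, and a topology-change flag for each received BPDU; it simply counts topology-change notifications per bridge in a sliding window and alerts when the rate is far above what a stable network should produce.

```python
from collections import deque, defaultdict

# Hypothetical thresholds: a healthy network sees topology changes rarely,
# so more than a handful per bridge per minute is suspicious.
WINDOW_SECONDS = 60
MAX_TC_PER_WINDOW = 5

_events = defaultdict(deque)  # bridge_id -> timestamps of topology-change BPDUs

def observe_bpdu(timestamp, bridge_id, is_topology_change):
    """Feed every received BPDU here; returns True if the bridge looks abusive."""
    if not is_topology_change:
        return False
    window = _events[bridge_id]
    window.append(timestamp)
    # Drop timestamps that have fallen out of the sliding window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_TC_PER_WINDOW:
        print(f"ALERT: {bridge_id} sent {len(window)} topology-change "
              f"BPDUs in the last {WINDOW_SECONDS}s - possible STP manipulation")
        return True
    return False
```

In practice the same idea is enforced far more simply inside the switch itself, by the guard features described next.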
Mitigation of this type of attack is done by enforcing the placement of root bridges in the network (Root Guard) or by protecting user-facing ports so that any received BPDU shuts the port down (BPDU Guard).

Private VLANs. To create multiple broadcast domains on a switch, VLANs were the solution; to restrict communication between ports within the same VLAN, private VLANs are the solution. They use port roles (isolated, promiscuous, and community) to restrict communication between ports. This can be bypassed using a layer two proxy attack, in which a packet is sent with the destination IP address of the target but the destination MAC address of the gateway: since switches work at the MAC level and routers at the IP level, the frame is switched to the router and then routed back to the target. The attack is unidirectional and can be prevented by configuring an access list on the router that blocks traffic between the IPs of its own LAN.

DHCP starvation is done by sending DHCP requests with spoofed MAC addresses to exhaust the DHCP server's IP pool. Because RFC 2131 provides no authentication of servers, a hacker can then introduce a rogue DHCP server that assigns IPs to clients while the real DHCP server is up and running, allowing a man-in-the-middle attack. RFC 3118 defines DHCP authentication, but it has seen essentially no deployment.

Configuration best practices

Sami Kamel Guirguis
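Several of the mitigations mentioned in this post (DHCP snooping, dynamic ARP inspection) boil down to the same idea: learn a trusted IP-to-MAC binding table from DHCP traffic and drop ARP messages that contradict it. Below is a minimal, library-free illustration of that logic; a real switch also tracks the ingress port, VLAN, and lease lifetime, and does the enforcement in hardware, all of which this sketch ignores.

```python
# Toy model of DHCP snooping + dynamic ARP inspection.

bindings = {}  # ip -> mac, learned from DHCP ACKs observed on trusted ports

def learn_from_dhcp_ack(ip, mac):
    """Called when the (trusted) DHCP server confirms a lease."""
    bindings[ip] = mac.lower()

def inspect_arp(sender_ip, sender_mac):
    """Return True if the ARP message is consistent with the binding table."""
    expected = bindings.get(sender_ip)
    if expected is None:
        # Unknown host: a real switch can drop, rate-limit, or just log this.
        return False
    if expected != sender_mac.lower():
        print(f"DROP: {sender_ip} claimed by {sender_mac}, "
              f"but DHCP bound it to {expected} (possible ARP poisoning)")
        return False
    return True

learn_from_dhcp_ack("10.0.0.5", "AA:BB:CC:00:11:22")
inspect_arp("10.0.0.5", "AA:BB:CC:00:11:22")   # legitimate reply -> True
inspect_arp("10.0.0.5", "DE:AD:BE:EF:00:01")   # gratuitous ARP spoof -> dropped
```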
<urn:uuid:a1806c63-a729-4ef6-9479-27daeac8015b>
CC-MAIN-2016-26
http://honeynet.org/node/384
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907735
1,028
2.59375
3
June 13, 2013 Space Telescope Science Institute, Baltimore, Md. NASA'S HUBBLE UNCOVERS EVIDENCE OF FARTHEST PLANET FORMING FROM ITS STAR WASHINGTON — Astronomers using NASA's Hubble Space Telescope have found compelling evidence of a planet forming 7.5 billion miles away from its star, a finding that may challenge current theories about planet formation. Of the almost 900 planets outside our solar system that have been confirmed to date, this is the first to be found at such a great distance from its star. The suspected planet is orbiting the diminutive red dwarf TW Hydrae, a popular astronomy target located 176 light-years away from Earth in the constellation Hydra the Sea Serpent. Hubble's keen vision detected a mysterious gap in a vast protoplanetary disk of gas and dust swirling around TW Hydrae. The gap is 1.9 billion miles wide and the disk is 41 billion miles wide. The gap's presence likely was caused by a growing, unseen planet that is gravitationally sweeping up material and carving out a lane in the disk, like a snow plow. The planet is estimated to be relatively small, at 6 to 28 times more massive than Earth. Its wide orbit means it is moving slowly around its host star. If the suspected planet were orbiting in our solar system, it would be roughly twice Pluto's distance from the sun. Planets are thought to form over tens of millions of years. The buildup is slow, but persistent as a budding planet picks up dust, rocks, and gas from the protoplanetary disk. A planet 7.5 billion miles from its star should take more than 200 times longer to form than Jupiter did (no more than 10 million years by current estimates) at its distance of 500 million miles from the sun because of its much slower orbital speed and the deficiency of material in the disk. TW Hydrae is only 8 million years old, making it an unlikely star to host a planet, according to this theory. There has not been enough time for a planet to grow through the slow accumulation of smaller debris. Complicating the story further is that TW Hydrae is only 55 percent as massive as our sun. "It's so intriguing to see a system like this," said John Debes of the Space Telescope Science Institute in Baltimore, Md. Debes leads a research team that identified the gap. "This is the lowest-mass star for which we've observed a gap so far out." An alternative planet-formation theory suggests that a piece of the disk becomes gravitationally unstable and collapses on itself. In this scenario, a planet could form more quickly, in just a few thousand years. "If we can actually confirm that there's a planet there, we can connect its characteristics to measurements of the gap properties," Debes said. "That might add to planet formation theories as to how you can actually form a planet very far out." The TW Hydrae disk also lacks large dust grains in its outer regions. Observations from the Atacama Large Millimeter Array in the Atacama desert of northern Chile, show dust grains roughly the size of a grain of sand are not present beyond about 5.5 billion miles from the star, just short of the gap. "Typically, you need pebbles before you can have a planet. So, if there is a planet and there is no dust larger than a grain of sand farther out, that would be a huge challenge to traditional planet formation models," Debes said. The team used Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) to observe the star in near-infrared light. 
The researchers then compared the NICMOS images with archival Hubble data and optical and spectroscopic observations from Hubble's Space Telescope Imaging Spectrograph (STIS). Debes said researchers see the gap at all wavelengths, which indicates it is a structural feature and not an illusion caused by the instruments or scattered light. The team's paper will appear online on June 14 in The Astrophysical Journal. For images, illustrations, and more information about TW Hydrae, visit: For more information about NASA's Hubble Space Telescope, visit:
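A quick back-of-the-envelope calculation helps unpack the "more than 200 times longer to form" figure quoted above. Kepler's third law alone, using the release's round numbers (7.5 billion miles for the suspected planet, 500 million miles for Jupiter, and a star 55 percent as massive as the sun), makes the orbit dozens of times slower; the rest of the 200-fold estimate comes from the much thinner disk material at that distance, which is a modeling assumption rather than simple geometry. The sketch below reproduces only the orbital-speed part.

```python
import math

MILES_PER_AU = 92.96e6  # one astronomical unit, in miles (approximate)

def orbital_period_years(a_miles, star_mass_suns=1.0):
    """Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[suns]."""
    a_au = a_miles / MILES_PER_AU
    return math.sqrt(a_au ** 3 / star_mass_suns)

jupiter = orbital_period_years(500e6)                       # about 12 yr with the rounded figure
candidate = orbital_period_years(7.5e9, star_mass_suns=0.55)
print(f"Jupiter-like orbit:   {jupiter:7.1f} years")
print(f"Suspected planet:     {candidate:7.1f} years")
print(f"Orbital period ratio: {candidate / jupiter:7.0f}x slower")
```

This is illustrative arithmetic only; it is not how the study's authors derived their estimate.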
<urn:uuid:050aa219-56fc-497b-81d3-e29ed9299115>
CC-MAIN-2016-26
http://hubblesite.org/newscenter/archive/releases/2013/2013/20/text/results/100/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930215
863
3.484375
3
Activities & Events Cub Scouting is an active program—boys learn by doing, and there's no end to the fun things that Cub Scouts do in their dens, as a pack, and at special events. As a leader, you should be aware of the wide array of activities that can be included in the Cub Scout program, and do your best to include as many of them as possible in your annual schedule. A den is a group of six to eight boys that meets several times a month between pack meetings. The den structure allows boys to build relationships with leaders and other boys. The monthly pack meeting brings together boys from every den, their leaders, and their families, to participate in a large-scale event that serves as a showcase for all that the boys have learned and done in their individual den meetings. Cub Scout Camping Organized camping is a creative, educational experience in cooperative group living in the outdoors. It uses the natural surroundings to contribute significantly to physical, mental, spiritual, and social growth. Cub Scout Outdoor Program Highlights This section in the annual Cub Scouting Highlights provides an overview of outdoor program opportunities for Cub Scouts. Leave No Trace Frontcountry Guidelines Leave No Trace is a plan that helps people to be more concerned about their environment and helps them protect it for future generations. This plan applies in a backyard or local park as much as it does in the wilderness. Excursions and Field Trips Cub Scouts enjoy many outdoor experiences as they participate in the variety of activities that can be held outside, such as field trips, hikes, nature and conservation experiences, and outdoor games. Blue and Gold Banquets During the month of Scouting's anniversary, packs across the country hold blue and gold banquets. In nearly all packs, the banquet is a highlight of the program year. Cub Scout Derbies Cub Scout derbies—the pinewood derby, raingutter regatta, and space derby—teach Cub Scouts craft skills, the rules of fair play, and good sportsmanship. Participating in service projects as a group is one of the ways in which boys in Cub Scouting fulfill their promise "to help other people." District and Council Activities Your local council or district office may schedule activities in which all the packs in your area are invited to participate. Age-Appropriate Guidelines for Scouting Activities These program-specific criteria are designed to assist unit leaders in determining what activities are age-appropriate for their participants. Nationally Approved Historic Trails Search our online listings of more than 300 trails have been approved for Tiger Cubs, Cub Scouts, Boy Scouts, Varsity Scouts, Venturers, and family campers. Jamboree on the Air (JOTA) Cub Scout Dens may participate in Jamboree-on-the-Air, an annual Scouting and amateur radio event that creates contact among Scouts from around the nation, and around the world. Guide to Safe Scouting Use the Guide to Safe Scouting to help you plan and conduct Scouting activities in a safe and prudent manner.
<urn:uuid:5cdff609-34eb-4777-9392-a4a81ad0bff1>
CC-MAIN-2016-26
http://iac-bsa.org/Home/CubScouts_Old/Leaders/Activities
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946258
630
2.515625
3
The Holuhraun eruption continues this morning, but earthquake activity seems to be somewhat less than yesterday. According to webcam observations there is no visible change in activity since yesterday, with effusive lava eruption and fountains. As reported, the biggest earthquake occurred early in the night, measuring 3.1 in magnitude; 110 quakes were detected in all. Most of the quakes, including the biggest one, were located in the northern part of the magma intrusion, with some activity extending under the glacial rim. Stöð 2 TV interviewed volcanologist Ármann Höskuldsson yesterday: "The eruption is similar to what it was yesterday [August 31]. The flow of lava is equivalent to a little more than Ölfusá river [the big river flowing through the town Selfoss in South Iceland]." Ármann continued: "This eruption is similar to the 1984 Krafla eruption, with lava fountains stretching as high as 60 to 70 meters [around 200 feet] up in the air. Still, the whole fissure has been on fire for more than 24 hours, so this is probably a bit bigger." It is clear that the future remains a big question mark. "The eruption may continue for a week or even a month. To predict what happens next we have to investigate GPS measurements to see if the pressure below has been reduced. If not, the eruption can just go on and on." The heat of the lava is over 1,000°C (more than 1,800°F), making it very difficult to get up close. Toxic gases are also a concern for the scientists. Hydrogen sulfide levels from the lava have increased since yesterday. "You find the sulfur taste in your mouth, and if we stay much longer this will become sulfuric acid in your lungs," Ármann said, and was on his way.
<urn:uuid:4f09be72-cc23-4de2-9084-9cb3216ee965>
CC-MAIN-2016-26
http://icelandreview.com/news/2014/09/02/eruption-may-last-week-or-even-month?page=7
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973849
395
2.671875
3
A major change is underway in where and how we are choosing to live. In 2011, for the first time in nearly a hundred years, the rate of urban population growth outpaced suburban growth, reversing a trend that held steady for every decade since the invention of the automobile. In several metropolitan areas, building activity that was once concentrated in the suburban fringe has now shifted to what planners call the “urban core,” while demand for large single-family homes that characterize our modern suburbs is dwindling. This isn’t just a result of the recession. Rather, the housing crisis of recent years has concealed something deeper and more profound happening to what we have come to know as American suburbia. Simply speaking, more and more Americans don’t want to live there anymore. (MORE: Do The Suburbs Make You Selfish?) The American suburb used to evoke a certain way of life, one of tranquil, tree-lined streets, soccer leagues and center hall colonials. Today’s suburb is more likely to evoke endless sprawl, a punishing commute, and McMansions. In the pre-automobile era, suburban residents had to walk once they disembarked from the train, so houses needed to be located within a reasonable distance to the station and homes were built close together. Shopkeepers set up storefronts around the station where pedestrian traffic was likely to be highest. The result was a village center with a grid shaped street pattern that emerged organically around the day-to-day needs and walking patterns of the people who lived there. Urban planners describe these neighborhoods, which you can still see in older suburbs, as having “vibrancy” or “experiential richness” because, without even trying, their design promoted activity, foot traffic, commerce and socializing. As sociologist Lewis Mumford wrote, “As long as the railroad stop and walking distances controlled suburban growth, the suburb had form.” Then came World War Two, and the subsequent housing shortage. The Federal Housing Administration had already begun insuring long-term mortgage loans made by private lenders, and the GI Bill provided low-interest, zero-down-payment loans to millions of veterans. The widespread adoption of the car by the middle class untethered developers from the constraints of public transportation and they began to push further out geographically. Meanwhile, single-use zoning laws that carved land into buckets for residential, commercial and industrial use instead of having a single downtown core altered the look, feel and overall DNA of our modern suburbs. From then on, residential communities were built around a different model entirely, one that abandoned the urban grid pattern in favor of a circular, asymmetrical system made of curving subdivisions, looping streets and cul-de-sacs. But in solving one problem—the severe postwar housing shortage—we unwittingly created some others: isolated, single-class communities. A lack of cultural amenities. Miles and miles of chain stores and Ruby Tuesdays. These are the negative qualities so often highlighted in popular culture, in TV shows like Desperate Housewives, Weeds and Suburgatory, to name just a few. In 2011, the indie rock band Arcade Fire took home a Grammy for The Suburbs, an entire album dedicated to teen angst and isolation inspired by band members’ Win and William Butler’s upbringing in Houston’s master-planned community The Woodlands. 
Although many still love and defend the suburbs, they have also become the constant target of angst by the likes of Kate Taylor, a stay-at-home mom who lives in a suburb of Charlotte and uses the Twitter name @culdesacked. "If the only invites I get from you are at-home direct sales 'parties,' please lose my number, then choke yourself. #suburbs." There is still a tremendous amount of appeal in suburban life: space, a yard of one's own, less-crowded schools. I don't have anything against the suburbs personally—although I currently live in Manhattan's West Village, I had a pretty idyllic childhood growing up in Media, Pennsylvania, a suburb twelve miles west of Philadelphia. We are a nation that values privacy and individualism down to our very core, and the suburbs give us that. But somewhere between leafy neighborhoods built around lively railroad villages and the shiny new subdivisions in cornfields on the way to Iowa that bill themselves as suburbs of Chicago, we took our wish for privacy too far. The suburbs overshot their mandate. Many older suburbs are still going strong, and real estate developers are beginning to build new suburban neighborhoods that are mixed-use and pedestrian-friendly, a movement loosely known as New Urbanism. Even though almost no one walks everywhere in these new communities, residents can drive a mile or two instead of ten or twenty, own one car instead of two. "We are moving from location, location, location in terms of the most important factor to access, access, access," says Shyam Kannan, formerly a principal at real estate consultancy Robert Charles Lesser and now managing director of planning at the Washington Metropolitan Area Transit Authority (WMATA). As the country resettles along more urbanized lines, some suggest the future may look more like a patchwork of nodes—mini urban areas all over the country connected to one another with a range of public transit options. It's not unlike the dense settlements of the Northeast already, where city-suburbs like Stamford, Greenwich, West Hartford and others exist in relatively close proximity. "The differences between cities and suburbs are diminishing," says Brookings' Metropolitan Policy Program director Bruce Katz, noting that cities and suburbs are also becoming more alike racially, ethnically, and socio-economically. Whatever things look like in ten years—or twenty, or fifty, or more—there's one thing everyone agrees on: there will be more options. The government in the past created one American Dream at the expense of almost all others: the dream of a house, a lawn, a picket fence, two or more children, and a car. But there is no single American Dream anymore; there are multiple American Dreams, and multiple American Dreamers. The good news is that the entrepreneurs, academics, planners, home builders and thinkers who plan and build the places we live in are hard at work trying to find space for all of them. Adapted from The End of the Suburbs: Where the American Dream is Moving by Leigh Gallagher, in agreement with Portfolio, an imprint of Penguin Random House. Copyright (c) Leigh Gallagher, 2013.
<urn:uuid:89f5b162-0fbe-4db8-a4a7-8405f2c5383f>
CC-MAIN-2016-26
http://ideas.time.com/2013/07/31/the-end-of-the-suburbs/?iid=sp-article-mostpop2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956589
1,377
2.515625
3
Night Vision Goggles in Civilian Aviation by G. J. Salazar, M.D. and Van B. Nakagawara, O.D. Article reprinted with permission of FAA Aviation Night vision goggles? Aren't they for the military and police? Not anymore! On January 29, 1999, the FAA issued the first Supplemental Type Certificate (STC) to permit use of night vision goggles by a civilian helicopter EMS (emergency medical service) operator. Since then several more have been issued to other commercial operators. In addition, rulemaking was initiated (but at the time of this writing is temporarily on hold) for changes to FAR Part 91 that would permit use of this technology by general aviation pilots. With this in mind, it will only be a matter of time before pilots start hearing more and more about these significant aids to night flying. Therefore, it is important for pilots to become aware of this technology and understand some of the basic operational issues. NIGHT VISION GOGGLES Night vision devices include a variety of different technologies, such as forward-looking infrared radar (FLIR) and night vision goggles. The focus of this article will be on night vision goggles, more commonly known by the acronym NVG. The simplest analogy to explain how NVG's work is a video camera. The basic principle is the same in that the user is not directly seeing what they look at, but rather is viewing an electronic image of the scene. NVG equipment may be monocular or binocular. However, in aviation, binocular, helmet-mounted equipment is almost exclusively used. Like a video camera, an NVG is an electro-optical device. Electromagnetic energy, both visible and infrared, reflected from the terrain at night enters the NVG through the objective lens. These photons of light energy are directed to an electronic processing unit called the image intensifier, which contains several components. The photocathode element in the image intensifier converts the light photons to electrons and moves them to the microchannel plate (MCP) which accelerates and multiplies them several thousand times. The electrons then strike the phosphor screen, which is ultimately responsible for emitting the visible light the user will see through the eyepiece lens as a focused image. Unlike the video camera, the NVG does not require much light to produce an image. Light as faint as a starlight or low-level moonlight will suffice. However, the efficiency of the equipment will be degraded in total darkness or with too much light. The image intensifier will increase what little light energy there is on average several thousand times. State-of-the-art NVG's are capable of intensification on the order of 35,000 times or more. That amplified or intensified energy is projected onto the phosphor screen, which creates the visible image the user-sees through the eyepieces. The NVG image is monochrome, i.e., in one color, typically either green or amber depending on the type of phosphor used. NVG equipment lacks the ability to produce a multi-color representation of a scene. Aviation NVG models are helmet-mounted with electrical power supplied by a battery pack attached to the back of the helmet. As with any optical device, the user has a variety of ways of adjusting fit and focus. The NVG binoculars and mounting assembly are cumbersome, weighing approximately one pound. In addition, one must factor in the weight of the helmet and battery pack. 
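One way to internalize the signal chain just described is to treat the tube as a fixed multiplier sitting between the scene and a phosphor screen that can only get so bright. The toy sketch below uses entirely made-up illumination units, an arbitrary screen ceiling, and an arbitrary noise floor (only the 35,000-times gain figure comes from the article); it is meant solely to show why the picture turns to noise when there is almost nothing to amplify and washes out when there is too much light.

```python
# Toy model only: units and thresholds are invented for illustration.

GAIN = 35_000        # light amplification quoted for state-of-the-art tubes
SCREEN_MAX = 100.0   # arbitrary ceiling on what the phosphor screen can display
NOISE_FLOOR = 0.5    # below this output level the picture is mostly scintillation

def intensified_output(scene_light):
    """Scene illumination (arbitrary units) -> displayed brightness, clipped at the screen."""
    return min(scene_light * GAIN, SCREEN_MAX)

for label, light in [("moonless overcast night", 0.000001),
                     ("clear starlight",         0.0001),
                     ("quarter moon",            0.001),
                     ("bright light in scene",   1.0)]:
    out = intensified_output(light)
    verdict = ("washed out" if out >= SCREEN_MAX
               else "mostly noise" if out < NOISE_FLOOR
               else "usable image")
    print(f"{label:23s} -> {out:8.3f} ({verdict})")
```

Real tubes add automatic gain control to cope with bright sources, which goes beyond what the article covers, but the basic proportionality is why performance degrades both in total darkness and with too much light.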
ADVANTAGES OF NVG's

The advantages of this night vision aid technology in aviation can be summed up as an increase in nighttime situational awareness for pilots. This technology does not turn night into day, but it does permit the user to see objects that normally would not be seen by the unaided eye. This would markedly decrease the possibility of collisions with terrain or man-made obstructions. Many other benefits exist, but the bottom line is that this technology, when properly used, has the potential to significantly increase nighttime flying safety.

DISADVANTAGES OF NVG's

Unfortunately, this increase in safety comes with a significant price. Some of the disadvantages of NVG's include:

- decreased field of aided view
- decreased visual acuity
- loss of depth perception
- lack of color discrimination
- neck strain and fatigue
- high initial cost to purchase
- requirement for ongoing maintenance
- need for recurrent training
- required modification of aircraft lighting

Current NVG's provide approximately 40 to 60 degrees of aided nighttime circular field of vision, although the user retains some unaided vision by being able to look peripherally around or under the goggles. With a reduced field of vision, effective scanning techniques are even more important than with unaided vision alone. Because one is looking at an electronic image, depth perception is lost. The user must learn to recognize terrain contrast and shadowing to replace some of the lost depth perception cues. Thus, the ability of the pilot to determine precise closure on terrain or other aircraft when these are first detected is limited. Low-light level operations inherently produce decreased visual resolution, acuity, and contrast, thereby making hazard detection more difficult. Visual acuity from NVG devices provides a vast improvement over unaided human night vision, which can be 20/200 or worse. With properly focused goggles at starlight or quarter moon, one can have nighttime visual acuity equivalent to 20/40 or 20/30. The latest generation of goggles can achieve 20/25; however, this is difficult to accomplish in an operational setting. Enhanced vision with NVG's varies with altitude and airspeed: "lower and slower" improves visual acuity. Therefore, a helicopter pilot would have some advantage over his or her fixed-wing counterpart in determining terrain features in low light conditions. In addition, newer generation equipment provides greater contrast detection, thereby improving situational awareness. It is important to note that NVG-aided acuity of 20/30 or 20/40 assumes proper cockpit lighting, properly focused and well-maintained goggles, and ideal environmental conditions. As mentioned previously, NVG's produce monochrome images. Because the eye can differentiate more shades of green than other phosphor colors, the night vision phosphor screen is typically green. This allows the user to see more detail, but with an inability to detect differences in color. Changing illumination can affect visual acuity. External incompatible light from the ambient environment could result in "washout" or halo effects when using NVG's. This could result in glare, flash blindness, and afterimage for the pilot. Particularly troublesome is ensuring aircraft and cockpit lights are NVG-compatible. Incompatible lights make the outside scene less visible with NVG's. Changing cockpit lights to be NVG compatible is very complicated and expensive. 
NVG's are sensitive to light ranging from yellow-green to near-infrared wavelengths. FAA-required aircraft position and anti-collision lights could cause problems for goggle wearers. NVG's are also subject to interference by environmental factors, such as rain, clouds, snow, mist, dust, smoke, and fog. In anything more than very small amounts, any of these will tend to severely degrade the performance of the equipment. During prolonged use of helmet-mounted NVG devices, the potential for neck discomfort and other problems, such as increased general fatigue, exists because of the weight of the helmet, battery pack, and NVG assembly. In summary, while NVG and other night vision technology are potentially great safety enhancements for select nighttime flight operations, they are expensive and sophisticated pieces of equipment requiring considerable effort to implement and maintain. Night vision goggles do not turn night into day, and if not properly used, rather than preventing accidents they could be the cause of one. Operational use of these devices should be accomplished only after pilots have received extensive, supervised ground and in-flight training with the equipment. Once trained, pilots must strive to maintain proficiency by ongoing use and recurrent training. G. J. Salazar, M.D. is the Regional Flight Surgeon in the FAA Southwest Region, Fort Worth, Texas. Van B. Nakagawara, O.D., is a Research Optometrist at FAA's Civil Aeromedical Institute in Oklahoma City, Oklahoma.
<urn:uuid:617e6708-3f32-4ace-b48a-9bbb6098a0be>
CC-MAIN-2016-26
http://iflyamerica.org/nightvision.asp
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.901565
1,891
3
3
PRIMAL ELEMENTS : THE ORAL TRADITION This is not an in-depth study of ancient Chinese cosmogony as it requires a specialization of ancient philosophical writings and a close acquaintance with the rich folklore of China. However, it can be said for certain that the theories of yin and yang and the five elements are a part of the ancient Chinese cosmogony. Shangshu (Book of Documents) contains the earliest textual reference of wuxing (five elements). Chinese civilization was born with agriculture. Here Shangshu speaks of the five elements in the words of an ancient Chinese farmer who also had handicraft skills in making wood and metal into implements. Kong Yinda (ad 574-648) was an important Tang court-scholar who was one of the ancient authorities in expounding the Confucian cultural tradition. While annotating Liji (Book of Rites), Kong wrote that water was created in the eleventh month, fire in the sixth, wood in the first, metal in the eighth and earth in the third month.2 In agriculture, life was incorporated into the seasonal changes of the year which came to mankind repeatedly as a routine. Kong Yinda’s comments are appended to one of the three chapters on yueling (lunar order) which deal with the twelve lunar months of the year. Kong Yinda, in his commentary on the Shangshu passage cited above, elaborates the ancient Chinese creation of the five elements in the xici (preface) of Yijing (Book of Change). This, said Kong, was the numbering in which the five elements were created. He continues: At this point, Heaven and Earth were without spouse. In this way yin and yang found their matches, and the beings of the universe took their forms.3 Jin Chunfeng, a modern Chinese scholar, has tried to find a diagram which can categorize traditional Chinese thought. He constructed the yueling tushi (diagram of lunar order) with the yin and yang and the five elements as its nucleus. He thinks that earth was the centre of the agricultural economic activities in ancient times, hence being placed in the centre (while wood, metal, fire, water are placed in the east, west, south, north respectively). The ancient Chinese linked space with time. East was linked with spring, south with summer, west with autumn, and north with winter. Earth controlled the four seasons and was the identification of men. Jin thinks that the above-mentioned identifications have outlined the ecological environment of the cradle of Chinese civilization in the valley of the middle stream of the Yellow River (during the Xia, Shang and Zhou dynasties). Men in this country noticed that when the east wind blew there came spring, therefore the east created spring and wood. In summer, hot wind came from the south which was the creator of fire and summer. Autumn was the harvest season. When the west wind blew, the crops turned golden, hence west with autumn and metal was taken in one category. Winter brought cold wind from the north, and marked the time of hiding and storing which characterized water.4 Lushi chungiu (Lu Buwei’s edition of Spring and Autumn) associates the five elements with five colours, and also with the development of ancient Chinese history. According to the textual tradition, Chinese civilization began with the Yellow Emperor (Huangdi). During the time of Huangdi the spirit of earth prevailed and its colour was yellow — hence the yellow earth and Yellow Emperor. 
During the time of Yu (the first king of the Xia dynasty), there was an exuberant vegetation in all the seasons which presented the colour of wood, i.e., green. During the time of Tang (the first king of Shang dynasty), the spirit of metal prevailed, presenting white colour. During the time of King Wen of Zhou, the spirit of fire prevailed, presenting red colour. Fire would be replaced by water which would present black colour. Water would, then, be absorbed by earth, and the rotation of the five elements would continue endlessly till eternity.5 The representative colours of the various seasons are the visual representations of the ecological environment. It makes sense to associate spring with green and wood as it is the season of growth of vegetation. Summer is associated with red and fire, as it is the hottest season with maximum sunshine. Associating autumn with white and metal needs a little more imagination. This probably had something to do with the withering of vegetation and transformation of a rich colourful world into whitish grey. Similarly the association of winter with water and black may be explained by the cold which forced people into hiding, thus bringing darkness. Besides this, black or other dark colours are used more often in winters. The historical reference in Lushi chungiu is also interesting. The Chinese civilization began with the beginning of agricultural pursuit which was to get some yield from the yellow earth, with the Yellow Emperor being a typical symbol. Then, plantation brought about the increasing importance of wood and prevalence of green colour during the time of Yu which showed a stage of economic advancement. Further advancement was made by the utilization of metal during the time of Tang (which, according to Chinese tradition, was nearly a thousand years after Yu). The Shang dynasty founded by Tang was a period of magnificent bronze wares which are found in almost all the major museums of the world. King Wen of Zhou ushered in a new era of more brisk human activities with larger territories and greater population brought under the pale of Chinese civilization. The prevalence of fire signified cooking, lighting, handicraft industry and war. Dong Zhongshu (176-104 bc), the famous Han prime minister who was responsible for creating a state ideology in the name of Confucius, used human temperament to analyze seasonal changes. Spring was the expression of happiness, there was warmth. The sun was the embodiment of joy and ecstasy which created summer. Autumn was created by anger. Sorrow made winter which was dominated by the concentration of yin, just like the sun marked the concentration of yang. Thus, spring was the spirit of love, summer was of joy, autumn of anger, and winter of sorrow. The spirit of love meant creation; that of joy meant growth; that of anger, success; that of sorrow, death and end. He concluded that: The Eight Trigrams (bagus) is a special growth of the Chinese cultural tradition in which both the yin and yang and the five elements play a vital role. There is no phenomenon in the universe which cannot be explained by the experts of the Eight Trigrams. Special theories of the working of the Eight Trigrams are found in the Chinese lunar calendar, and in the life of the users of the calendar even to this day of modern science and technology. 
Each day of the calendar is allotted one of the five elements, which dominates for two consecutive days and give way to the next in a rotation which works in the following manner : Chinese fortune-tellers essentially use the five elements for their calculations. The five elements form a circle of one constraining the other: water constrains fire, fire constrains metal, metal constrains wood, wood constrains earth, earth constrains water. As each birth sign is associated with one of the five elements, matchmakers would normally ensure that the wife’s element does not constrain that of the husband as it would result in endless family trouble. But the element of the husband constraining that of the wife will be regarded natural and logical. The five elements along with other concepts form the Chinese non-alphabetic script. A large number of the Chinese written characters have one of them forming a part of the stroke combinations. According to folklore a person born in a particular year should prefer some ideographic parts in his written name. For instance, a person born in the year of Tiger can be blessed with a gentle and sagacious nature, and can achieve fame and richness if his/her name has both metal (jin) and wood (mu) in it. A person born in the year of Monkey will be romantic and optimistic if he/she has water as a part of his/her name. The holy book of Taoists, Daodejing, traces Tao as the creator of the universe. Tao has created the five elements by its movements, revolutions, dynamics and motionlessness. It has created the yin and yang and everything. Taoism as a religion has absorbed many of the domestic Chinese cults which have connections with ancient Chinese cosmogony. Taoists are ardent worshippers of the Earth-God. A variation of the Earth-God is the Wall-God (chenghuang) which guards the walled towns. Taoists also worship the Kitchen-God. According to legend, one of the culture heroes of China was Yandi who was the god of fire. After he died he became the kitchen. The five elements are also five star-gods in the Taoist tradition. The Wood-star is called Suixing, the Earth-star Zhenxing, the Metal-star Taibaixing, the Water-star Chenxing, and Fire-star Yinghuoxing. The four famous Taoist supernatural animals, also called zhenmushou (guardian-angels of the graves), are Blue-dragon, Red-bird, White-tiger, and Xuanwu. The Blue-dragon guards the east, hence blue or green. The Red-bird guards the south, hence red. The White-tiger guards the west, hence white. Xuanwu guards the north, hence black. The four different colours are those of wood and spring, fire and summer, metal and autumn, water and winter, respectively. The Yi nationality which populates the four southwestern provinces of Sichuan, Yunnan, Guizhou and Guangxi, worships sun, moon, Water-God, Fire-God, Mountain-God, Stone-God and Heaven-God. People believe in Water-God as the controller of rains, Fire-God as the force to dispel evil spirits, Mountain-God as a protector of men from the attack of wild animals, and Stone-God as a guardian-angel against theft and children’s diseases. Another very small nationality called Bulang in the Bulang Hill area in Yunnan (with a population of less than one lakh) worships Fire-God and Earth-God. People worship Fire-God for protection from fire. They worship Earth-God for safety to human life and for a bumper harvest. They conceive a Water-spirit which has a human-head with a snake-body. The Water-spirit comes out for mischief during heavy rains and flood. 
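Returning to the constraining cycle and the traditional matchmaking rule described earlier in this section, the relations are simple enough to capture in a few lines of code. This is a minimal sketch for illustration only; the function names are my own, and the rule encoded is just the one stated above (the wife's element should not constrain the husband's).

```python
# The constraining cycle of the five elements, as described above:
# water constrains fire, fire constrains metal, metal constrains wood,
# wood constrains earth, earth constrains water.
CONSTRAINS = {
    "water": "fire",
    "fire": "metal",
    "metal": "wood",
    "wood": "earth",
    "earth": "water",
}

def constrains(a: str, b: str) -> bool:
    """True if element a constrains element b."""
    return CONSTRAINS[a] == b

def match_acceptable(husband: str, wife: str) -> bool:
    """Traditional matchmaking rule described in the text: the wife's element
    should not constrain the husband's; the reverse is considered natural."""
    return not constrains(wife, husband)

if __name__ == "__main__":
    print(constrains("water", "fire"))        # True
    print(match_acceptable("wood", "earth"))  # True: the husband's wood constrains the wife's earth
    print(match_acceptable("fire", "water"))  # False: the wife's water constrains the husband's fire
```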
They also worship the Mountain-God to protect them and give them prosperity, as they are mountain-dwellers leading difficult lives. Many of them are Buddhists; but Buddhism does not conflict with their traditional beliefs. The Bai nationality which resides in Yunnan, Guizhou and Hu’nan have an earth-breaking ceremony in the spring festival every year in order to have good weather and good harvest. People also worship the Mountain-God for good crops and protection of the domestic animals. The Miao race which was a major native population of south China and is now spread in Yunnan, Guizhou, Guangxi, Hu’nan, Sichuan, Hubei, has a strong Earth-God worship. Every village has a temple for the Earth-God. The temple for the Earth-God is equally popular among the Han residents of south China. A teleological approach has been used here to study the five elements, and to ponder upon the purpose of the ancients in such an analysis. The major approach of ancient Chinese thinking was to synthesize human activities with their natural environment, which contrasts with the European approach of isolating various objects to gain a deeper insight into their nature and dynamics. Many Chinese feel that these two approaches have led the Europeans to develop modern science and technology, while such a development escaped China. The Chinese approach has been holistic. Although people did observe natural phenomena, they established too early an organic linkage between man and nature. One aspect of this man-nature synthesis was to humanize nature, attributing a human character to natural changes. Conversely, the other aspect of synthesis subjected men under the domination of nature, to bind human activities to movements of the sun, moon and stars. Some modern Chinese scholars, like Jin Chunfeng, do not disparage the Chinese holistic tradition. Jin thinks it provides a very ideal scientific approach. First, the approach does not concentrate on isolated individual entities, but on the entirety or the system as a whole. Second it is not static, but dynamic, grasping the movements of the objective entities within the evolutions of time and qi (ether). Third, it does not take into consideration the inner structure and composition of an object, but on its function and nature. Since every object is a process of flowing and revolving, it only maintains a temporary stability which should not be mistaken as a fixed structure. Fourth, the approach does not eye on the functions and natures of the parts, but on the functions and responses of the whole. Fifth, it does not pay attention to the geometric models and trajectories, but tries to size up the entire developmental trend of the objects.7 Jin illustrates these characteristics of the Chinese tradition by the example of Chinese medicine. Chinese pathology treats every organ of the body as a moving process, as the entire body is in a process of decaying, like the river flowing downwards. Stability is viewed as in a state of ephemeral whirlpool.8 Chinese medicine treats ailment as a disturbance of the natural equilibrium in the body, and tries to send input to the body to sustain its vitality to slow down the process of decay. After all, the human body, like all other beings in the universe, is the combination of yin and yang and the five elements. I would like to make clear that I am neither a believer of the five elements and yin and yang, nor sceptical about the expertize in them which I have no share in me. 
There could be some similarities between the Chinese ancient and tribal cosmogonies and those of India. Indian scholars are welcome to explore the field. ©1995 Indira Gandhi National Centre for the Arts, New Delhi
<urn:uuid:7539e848-a6b3-42b8-af88-7a50fc37ef5d>
CC-MAIN-2016-26
http://ignca.nic.in/ps_01005.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961837
2,972
3.421875
3
June 20, 2011 Although we all know what we should do in our studies, work, social life, family life, etc., it is natural for human beings to expect recognition for tasks well done. In psychology this recognition is known as Validation. Everyone is happy to receive validation. Validation is good for the soul and the ego; it increases the self-esteem and sense of worth of its recipient. There are several forms of validation. Validation ranges from simple praise and words of encouragement to merit medals, certificates, awards, trophies, gifts, and various other forms of incentive. All parents should learn the importance of validation and should validate their children when they achieve or do something well. This attitude encourages children and makes them more self-reliant. Validation is also important in a love relationship. The wife praising good pasta or a delicious barbecue prepared by her husband is a validation. This type of validation helps sustain a long-lasting marriage. The teacher writing "Congratulations!" on an exam or a piece of schoolwork well done by his student is a form of validation that is sure to make the student happy, no matter what her age is. This validation encourages her to continue her studies in the best possible way. Validation legitimizes, confers value, confirms, emphasizes, recognizes and enhances the other person. It is one of the best things we can give to someone, as by doing so we raise their self-esteem and give them confidence to continue doing their best. Big corporations are well aware of the importance of validation, and therefore they promote annual and even monthly awards, such as "employee of the month", as a way to stimulate employees and thus increase productivity. Go on, through your life, validating.... June 11, 2011 We all know that suicide is the act of ending one's own life. Usually, when someone talks about a suicide or news is published about one, we find phrases like "jumped off the bridge", "hung himself", "cut his wrists", etc. There are many ways to kill oneself. Generally, the concept of suicide ingrained in our unconscious is the deliberate and abrupt act of ceasing one's existence. However, we must always remember that suicide may also be a slow process, one that most people who practice it are not aware of. A person who suffers from diabetes and still insists on eating carbohydrates and sugars is practicing Slow Suicide. The same applies to a person with high levels of "bad" cholesterol who keeps on eating fried and fatty foods. Smokers, drug users, alcoholics: all of them practice Slow Suicide, as they shorten their life span by not giving the necessary importance to their health. Slow Suicide is also practiced by people with emotional or psychological problems who do not give adequate attention to their condition; for example, it is not healthy to have panic attacks. Panic Disorder causes shivering, sweating, a narrowing of the visual field, palpitations and other physical symptoms that, over a period of time, will wear down the person's body. Broadly speaking, we can say that Slow Suicide is practiced by all those who, in spite of knowing they have health problems or unhealthy lifestyles, continue to act in a childish way, ignoring their problem, be it physical or mental, as if it did not exist, and by so doing they shorten their years of life on this planet. If Slow Suicide is a new concept for you, please stop and think about it. 
Check whether you or a loved one is practicing Slow Suicide; if the answer is 'yes', do something to reverse this situation. Actually, we all know what is right and what is wrong, and what is the best path to follow. Committing Slow Suicide ... are you sure that is what you want for yourself? June 6, 2011 We have all heard the popular saying that "hell is full of good intentions", but what does it actually mean? First of all, I must say that I do not know whether hell is full or not, because I have not been there to check (thankfully!!). What this popular saying means is that it is not enough simply to have good intentions. Far beyond good intentions, we need to act, to do something in practice, to carry out something concrete and, above all, to be sure that we really know what we are doing and to do the best we can. If you are merely well intentioned, you will not be adding much to this world; on the contrary, you may even be making an already difficult and complicated situation worse. It is not enough to have good intentions; we need to do the right thing! Imagine that you want to help your child with a school assignment but do not fully understand the subject in question. Despite having good intentions, you may pass on mistaken information and end up confusing your child and hurting their grade. A wife may want to act as her husband's secretary and help him write reports for his clients, but, not being comfortable with the computer, she ends up pressing some command that makes him lose all his work. The devoted wife had good intentions but ended up doing harm instead of helping. I may have the good intention of curing someone by prescribing them a medicine, but, not being a doctor, I may end up harming them and making the health of the person I genuinely intended to help even worse. These are simple examples of why "hell is full of good intentions". It is not enough to have good intentions; it is essential to know how to do the right thing! June 2, 2011 Mourning is a natural reaction and a slow and painful process of adjustment following a significant loss suffered in life. Mourning is felt not only when a loved one dies. Having a toy that you loved lost, stolen or destroyed at 6 years of age, or a dog killed by a car, can also give rise to mourning. It does not matter whether it is an inanimate object or an animal; what matters is that it had great meaning in that person's life and that it was a loss, and as such it is capable of producing the same feelings of pain and anguish as mourning. Other significant losses that may also generate pain and anguish as in mourning are: disability after an accident, loss of any part of the body (hand, leg, foot, arm), a mastectomy, loss of vision, rape, and even loss of a job. For many people, other forms of loss may include losing hope in a goal that the person had set for himself, moving to a new house, city, state or country, or suffering economic ruin. These events are considered losses that generate feelings of anguish, as in mourning. Romantic Mourning is accompanied by pain so intense that it seems it will never go away and that we cannot live without the company of the lost loved one. Besides being extremely painful, romantic mourning creates great emotional and physical distress. 
The breakup of a close loving relationship, whether by divorce, separation or abandonment, carries a feeling of such deep sorrow that, according to psychologists, it is the second most painful and tragic type of loss that a person can endure. Allow yourself to feel the pain, anguish, guilt and fear, so that these feelings can be naturally digested; by doing so, you allow the wound to heal within the normal range, and in this way it will not turn into obsession.
<urn:uuid:8b98cde9-847e-46e7-8f76-eb3a03b693b0>
CC-MAIN-2016-26
http://indiagestao.blogspot.com/2011_06_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.818943
1,873
2.671875
3
by Burke Speaker | September 25, 2013 9:27 am More and more research on weight loss and health is showing that longer sleep can actually help with shedding those extra pounds. "There are over two dozen studies that suggest that people who sleep less tend to weigh more," Sanjay Patel, assistant professor of medicine at Case Western Reserve University in Cleveland, told WebMD. In one study, researchers at the University of Chicago examined 1,000 people and discovered that overweight people slept on average about 16 minutes less a day than those at a normal weight. Patel's own research followed more than 68,000 women for 16 years and found that the subjects who slept five hours or less a night were nearly one-third more likely to gain 30 pounds or more than those who got at least seven hours. A number of reasons for this are suggested: sleep deprivation may affect appetite, leading to increased hunger and snacking during the day; sleep-deprived people are at risk for increased fatigue and are less prone to exercise; and the lack of restful sleep can change basal metabolic rate, reducing the calories burnt during even ordinary activities, such as breathing and maintaining body temperature.
<urn:uuid:102dde66-ecdc-4496-8272-2a559e9a8fb7>
CC-MAIN-2016-26
http://investorplace.com/2013/09/health-studies-showing-longer-sleep-can-lead-to-weight-loss/print
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953943
328
3.09375
3
If you've ever taken a drawing class, you are probably familiar with the idea of drawing what you see with your eyes, not what your mind thinks you should see. In other words, really look at what you are drawing and respond to that, rather than some mental notion of what the object is. We could all draw a coffee mug, for example, from an image in our mind. It is a different thing to really look at a particular coffee mug and trace its outlines. Drawing from imagination or memory and drawing from seeing are two different things, and practicing the latter will enhance your skills at the former. In order to draw from seeing, we have to let go of any desire we might have for the drawing to look like something. Paradoxically, that desire – that attachment to our idea of what something should look like – only gets in the way of seeing. Here is an exercise that will help you Let Go of Looking Good: You will need: Paper that will accept watercolor, several sheets A white crayon Watercolor paint, any color Cheap drawing paper or a sketchbook Pen of your choice A mirror 1. First establish that this exercise is for your eyes only, unless you decide to share it. 2. Remember to approach the exercise with a sense of inquiry, not with a particular goal. It does not matter what the result looks like; what matters is that you approach it honestly and wholeheartedly. 3. Set up your mirror so you can see your face. 4. Slowly and carefully, draw your face with white crayon on the watercolor paper without looking at the paper. Look at your face in the mirror only. Take your time. 5. Brush water over the drawing, then paint over it in watercolor to reveal the drawing. Put it aside to dry, and repeat the process. 6. After a few blind self-portrait drawings, try a few with pen on the drawing paper or in your sketchbook, both blind and looking at your paper. Even when you are not doing a blind drawing, look more at the subject (your face in the mirror) than at the paper. Making self portraits, for me, is one of the most effective ways of practicing letting go of looking good. This letting go not only helps you to see better, but it cultivates acceptance. Sometimes it is hard to accept how age affects how we look. We're tempted to leave out that wrinkle or make our eyes look bigger and more youthful. Resisting this temptation not only encourages acceptance, but will result in much more beautiful and interesting drawings.
<urn:uuid:ffb355f7-804c-41c3-9cfc-5eda8d74cccd>
CC-MAIN-2016-26
http://janedavies-collagejourneys.blogspot.com/2011/03/letting-go-of-looking-good.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951391
540
2.5625
3
The issue of the centring of the Black resistance struggle around Black America, through the Black Lives Matter movement, has raised a heated debate on Twitter. The debate is that Black Lives Matter effectively focuses on Black (hetero-male) lives, which erases not only the lives of Black women, LGBTQ people and so forth in America, but also the Black diaspora: in the UK, France (and the rest of Europe), Brazil (and South America), and so forth. This debate began as a result of a discussion around police brutality and the statistics of deaths in police custody (I'm not here to clear up who said what; I want to discuss Black solidarity). This centring of Black America in the Black liberation movement is not new. What we have to do is understand why Black America, historically, has been centred, and recognise that it is not through the wishes or desires of Black Americans themselves. The centring of Black America happens mainly as a result of two factors. The first reason is globalisation and U.S. cultural imperialism. Black America is a sub-group of American society and has had an overwhelming cultural influence in the U.S. – in music (Hip Hop, Rock and Roll, Blues, etc.), literature, drama, theatre, movies and television, and sports – which has had a global impact due to American hegemonic imperialism and the fact that the United States was able to make so much money from Black America. The second reason is distraction. Here in the UK, and in a lot of other western countries, particularly those in Europe, we mainly learn about the Black civil rights struggle in America in our education system, as opposed to the domestic struggle, which has a two-fold effect: primarily, it provides a single narrative and disconnects it from the rest of the Black diaspora. For example, they'll teach you about Martin Luther King and the Montgomery Bus Boycotts (which we should all deservedly know), but they won't teach you about how King went to Ghana for its independence and the inauguration of Kwame Nkrumah as the first President of Ghana, in order to organise politically (which we should all deservedly know too). The other effect is that America is often used as a comparison to highlight, seemingly, how much worse Black people are treated there, because their oppression was legislated and more tangible (Jim Crow, police brutality, etc.), and to suggest that we should not complain if we are Black in the UK, France, or any other diaspora outside of America. Another point to make is how sometimes we, the Black diaspora outside of America, internalise this centring of Black America and re-perpetuate American exceptionalism, though through no demand of Black Americans themselves. We too often use AAVE/Black American slang, we used to rap in American accents, and even dress similarly. Much of this is due to the global representation of a singular Blackness – that of Black America – and how it has impacted us both positively and negatively. The question of Black American solidarity with the rest of the Black diaspora is one that is very quickly and easily answered. There are countless examples where historically Black America has shown solidarity with the global Black diaspora and the continent of Africa, and this relationship has been reciprocal. Here are some examples: Malcolm X: In 1964 Malcolm X visited Nigeria where he was initiated in the Yoruba tradition, and given the name "Omowale" meaning "the child (who) has come home". 
In a letter written whilst he was in Nigeria, X says “here in Africa, the 22 million American blacks are looked upon as the long-lost brothers of Africa. Our people here are interested in every aspect of our plight, and they study our struggle for freedom from every angle. Despite Western propaganda to the contrary, our African brothers and sisters love us, and are happy to learn that we also are awakening from our long "sleep" and are developing strong love for them”. Furthermore, in another speech to Mississippi youth in December of 1964 (when Malcolm returned from his African Sankofa journey) he made the following statement: “in my opinion, the greatest accomplishment that was made in the struggle of the black (wo)man in America in 1964 toward some kind of real progress was the successful linking together of our problem with the African problem, or making our problem a world problem”. Malcolm X also came to Smethwick, UK (not London!), to organise and fight against the racism and oppression experienced by Black people here. Nnamdi Azikiwe and Kwame Nkrumah, the first Prime Minister of Nigeria and first President of Ghana, respectively, both studied at Lincoln University, which is a Historically Black College/University (HBCU). W.E.B. Du Bois, PhD, writer, educator, scholar and a founder of the NAACP, spent the remaining years of his life in Ghana, working as a special advisor to Kwame Nkrumah. Du Bois also organised the first Pan-African Congress in London, 1919. Kwame Toure, formerly known as Stokely Carmichael, originally from Trinidad in the Caribbean, was a revolutionary civil rights (Black Panther Party) and pan-African activist (AAPRP – All Afrikan Peoples Revolutionary Party), married South African singer Miriam Makeba in 1968 and relocated to Guinea where he became an aide to Guinean President Ahmed Sekou Toure. In 1935, a number of African Americans volunteered to fight in the Ethiopian resistance war against Italian colonialism, many of whom would later re-settle there or eventually pass away. And to finish with another poignant example, a personal favourite of mine: Jean Grae’s (African American rapper/MC) verse in Black Girl Pain – “This is for Beatrice Bertha Benjamin who gave birth to Tsidi Azeeda For Lavender Hill, for Khayelitsha Athlone Mitchell's Plain, Swazi girls I'm repping for thee Manenberg, Gugulethu; where you'd just be blessed to get through For beauty shining through like the sun at the highest noon From the top of the cable car at Table Mountain; I am you Girls with the skyest blue of eyes and the darkest skin For Cape Coloured for realizing we're African For all my cousins back home, the strength of Mommy's backbone The length of which she went for raising, sacrificing her own The pain of not reflecting the range of our complexions For rubber pellet scars on Auntie Elna's back, I march Fist raised, caramel shining, in all our glory For Mauritius, St. Helena; my blood is a million stories Winnie for Joan and for Eadie, for Norma, Leslie, Ndidi For Auntie Betty, for Melanie; all the same family Fiona, Jo Burg, complex of mixed girls For surviving through every lie they put into us now This worlds yours', and I swear I will stand focused Black girls, raise up your hands; the world should clap for us”. These are examples that are easily accessible and easily researched, and which show us the strength of solidarity.
They also show that, historically, Black America has not centred itself in the struggle by choice, and that we have been most successful when we have been connected and unified. Also, it is important to critique each other, as well as ourselves. I strongly believe we must self-critique if we are to move forward. Hence, if we are to question Black America and the centring of the struggle – via BLM – on domestic Black American issues, we in the remaining Black diaspora also have to ask ourselves how much we centre ourselves and place our own issues above those of our brothers and sisters on the African continent. We protest about Black deaths in the U.S. and in the U.K., but what about Brazil, where the biggest Black diaspora is found, approximately 120 million, and where, on average, a Black person is killed by police every 7 hours? Or Congo, where earlier this year over 400 bodies were found in a mass grave outside Kinshasa, believed to be the bodies of protesters who went missing earlier in the year? The point I am making, and I hope that this is the message that is left with whoever reads this post, is that the same system that oppresses and allows the indiscriminate murder of Black people in police custody in the U.S., Brazil, the U.K., France, and the rest of the diaspora is the same system that oppresses our brothers and sisters on the continent. We have always had greater success in self-determining our existence when we have connected the issues that we face, rather than divided them. If we are to be successful once more, which is inevitable, we must critique each other, yes, but in love, and organise to connect with each other, which is so much easier to do given the facility provided by modern technology.
<urn:uuid:66e0a1d2-90ad-45a9-90d7-b3008bc66389>
CC-MAIN-2016-26
http://jjbolawrites.blogspot.co.uk/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959018
1,908
2.71875
3
Was he the greatest of Australian explorers? In 1862 John McDouall Stuart finally succeeded in crossing the continent from sea to sea. He took off his boots, dipped his feet in the Indian Ocean, and hoisted the Union Jack. When Stuart and his companions returned to Adelaide, they were celebrated as heroes of the age. They had navigated their way on horseback through vast expanses of country unknown to Europeans, struggling from one water source to the next. Yet the country they travelled through was already home to thousands of people, celebrated in song and story, every feature of the landscape known and named. The intruders and their strange animals were observed, their tracks closely examined. Stuart’s explorations were the catalyst for great changes, both for the new colonists and for the people who had been living on country for tens of thousands of years. “Crossing Country” is the best and most lavish Stuart exhibition ever launched. It recognises the 150th Anniversary of Stuart’s crossing of the Australian continent and draws on many items not previously seen by the public. Visit Crossing Country at the Migration Museum, 82 Kintore Ave, Adelaide. Open 10-5 weekdays and 1-5 on weekends.
<urn:uuid:cfc90d68-6e11-4b9a-80f3-7d216ce3a87a>
CC-MAIN-2016-26
http://johnmcdouallstuart.org.au/crossing-country.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.980204
253
2.859375
3
Journal of Pediatric Gastroenterology & Nutrition: Paediatric Offices, The General Infirmary at Leeds, West Yorkshire, UK. Address correspondence and reprint requests to Dr John Puntis, Paediatric Offices, A Floor, Old Main Site, The General Infirmary at Leeds, Great George St, Leeds LS1 3EX, West Yorkshire, UK (e-mail: [email protected]). The author reports no conflicts of interest. Malnutrition may be defined as “a state of nutrition in which a deficiency, excess or imbalance of energy, protein, or other nutrients, including minerals and vitamins, causes measurable adverse effects on body function and clinical outcome.” In children, impairment of growth is clearly an adverse effect and is easily measurable by simple anthropometry. Because malnutrition can also be regarded as a continuum starting with a nutrient intake inadequate to meet physiological requirements, followed by metabolic and functional alterations, and ultimately by changes in body composition, classification based solely on commonly used growth criteria is not just arbitrary—it tells only part of the story. Definitions in common usage for inpatients were developed by field workers in resource-poor countries attempting to determine the effect of food availability on local populations. How valid these are when considering the effect of nutritional status on disease outcomes in hospitalised children in developed countries is at least open to question. For example, where food is scarce, children between −1 and −2 SD deviations below the mean of weight for height (equivalent to Waterlow malnutrition grade 1) (1) may well be a vulnerable group (with increased risk of death from infection), but for many hospitalised children in developed countries with acute, short-term illness, this may indicate no more than having a thin physique or a temporary weight loss that will be made good within days of recovery and discharge home. For them to be classified as “at risk” (2) begs the question, “at risk for what?” DEFINITIONS OF MALNUTRITION BASED ON GROWTH STATUS Gomez (3) described nutritional status in children admitted to hospital in Mexico City in the 1950s and experiencing malnutrition from inadequate food availability. He divided them into 3 groups according to percentage weight for age (based on Boston growth standards) and showed that the most malnourished (weight for age <60%) were more likely to die, usually from respiratory or gastrointestinal infection. Gomez, therefore, established an important link between malnutrition, poor outcome, and the confounding factor of infection. Almost 20 years later, Waterlow discussed a new classification and definition of protein-energy malnutrition (PEM) (1). This was prompted by the Eighth Joint Expert Committee on Nutrition of FAO and WHO seeking a universal definition of PEM that would allow meaningful comparison of prevalence rates in different countries. Rather than just using percentage ideal weight for age, it was suggested that height or length measurements were important for giving an indication of the duration of malnutrition. Waterlow then subdivided deficit in percentage expected weight for height (thinness, indicative of acute malnutrition) and percentage expected height for age (stunting, indicative of chronic malnutrition). Grade 1 wasting (expected weight for height 90%–80% of the reference standard) highlighted vulnerable children but was not in itself considered to be an indication for intervention at a population level (4). 
McLaren and Read raised the objection that not all children of the same height should have the same weight because the relation between height and weight shows some variation with age (5). They devised a nomogram for diagnosing PEM and, in a worked example of >500 poor children from Beirut, showed that compared with Waterlow, their approach would triple the number of children requiring nutritional intervention. This graphically illustrates how minor changes in classification of malnutrition can lead to a large variation in estimated prevalence. The current WHO definitions of PEM categorise between −2 and −3 SD and <−3 SD, respectively, as moderate and severe acute malnutrition (weight for height) and moderate and severe chronic malnutrition (height for age). PREVALENCE OF HOSPITAL MALNUTRITION IN DEVELOPED COUNTRIES A number of studies during the last 30 years have attempted to establish the prevalence of “malnutrition” amongst hospitalised children in Europe and America using a variety of standards based on growth/anthropometry including Waterlow and WHO. Perhaps not surprisingly, given the use of different definitions and reference standards, different answers have been obtained, with ranges for chronic malnutrition varying, for example, from 9% to 47%. These studies share, however, the common conclusion that malnutrition in hospital settings is alarmingly common and often unrecognised. The need to evaluate this claim critically can be illustrated by reference to 1 recent study (6) in which the Waterlow criteria were used; if WHO criteria for malnutrition are substituted, the prevalence of malnutrition falls from 24% to around 6%. A recent Dutch national study (7) using WHO criteria found 11% of hospital admissions to show acute and 9% chronic malnutrition. “SCREENING” FOR HOSPITAL MALNUTRITION On the basis of the argument that paediatricians miss malnutrition because of an institutional incapacity to routinely weigh, measure, and chart all of the children, simple screening tools have been suggested; why the institution should be any better at using these is unclear. Alarmingly, 1 such tool has identified as many as 62% of children as being “at risk” for malnutrition. This is surprising given a median hospital stay of 2 days for those judged “low risk” and 3 days for “moderate and high risk” groups (8), suggesting that many were not severely ill. Implementing screening for arbitrarily defined malnutrition/risk when it is unknown whether screening tools in any way predict outcome or permit effective intervention may be premature. The dangers of such an approach include not only creating an unnecessary market for nutritional support products (1 study suggests prescribing sip feeds after screening but before medical and dietetic assessment) (8) but also that screening becomes a “tick box” indicator of imagined quality while in reality serving as a substitute for sound clinical assessment of individual children. If growth monitoring and nutritional history taking were performed routinely and linked to action plans for all children admitted to the hospital, then there would be no need for an alternative form of “screening.” The precise indications for nutritional support as well as the benefits of screening tools require scientific evaluation in specific groups of patients. Accurate growth measurements and plotting are certainly required because relying on clinical impression alone is highly inaccurate (9). 
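To make the contrast between the percentage-of-reference and z-score classifications concrete, the sketch below classifies a child under both schemes. It is illustrative only: it assumes the reference median weight-for-height and the WHO weight-for-height z-score are already available, only Grade 1 of the Waterlow scheme (80%-90% of reference) is quoted in the text (the remaining cut-offs follow the conventional Waterlow grading), and the example values and function names are invented.

```python
def waterlow_wasting_grade(weight_kg, reference_median_weight_kg):
    """Waterlow-style grading by percentage of the reference median weight-for-height.
    Grade 1: 80-90%, Grade 2: 70-80%, Grade 3: <70% (0 = not wasted)."""
    pct = 100.0 * weight_kg / reference_median_weight_kg
    if pct >= 90:
        return 0
    elif pct >= 80:
        return 1
    elif pct >= 70:
        return 2
    return 3

def who_acute_malnutrition(whz):
    """WHO category from a weight-for-height z-score (WHZ)."""
    if whz < -3:
        return "severe acute malnutrition"
    elif whz < -2:
        return "moderate acute malnutrition"
    return "not acutely malnourished"

# A child at roughly 85% of the reference median (Waterlow grade 1) will often sit
# between -1 and -2 z-scores and therefore not be counted under the WHO definition,
# one reason the choice of classification changes estimated prevalence so much.
print(waterlow_wasting_grade(9.4, 11.0))   # -> 1
print(who_acute_malnutrition(-1.5))        # -> "not acutely malnourished"
```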
Meanwhile, there should be individualised nutritional assessment for all children admitted to hospital, with interventions aimed at preventing or reversing growth deficits (Table 1) (10) and specific nutritional deficiencies. 1. Waterlow JC. Classification and definition of protein-calorie malnutrition. Br Med J 1972; 3:566–569. 2. Moy RJD, Smallman S, Booth IW. Malnutrition in a UK children's hospital. J Hum Nut Dietetics 1990; 3:93–100. 3. Gomez F, Galvan RR, Frenk S, et al . Mortality in second and third degree malnutrition. J Trop Pediatr 1956; 2:77–83. 4. Waterlow JC. Some aspects of child nutrition as a public health problem. Br Med J 1974; 4:88–90. 5. McLaren DS, Read WWC. Classification of nutritional status in early childhood. Lancet 1972; 300:146–148. 6. Pawellek I, Dokoupil K, Koletzko B. Prevalence of malnutrition in paediatric hospital patients. Clin Nutr 2008; 27:72–76. 7. Joosten KF, Zwart H, Hop WC, et al . National malnutrition screening days in hospitalised children in The Netherlands. Arch Dis Child 2010; 95:141–146. 8. Hulst JM, Zwart H, Cop WC, et al . Dutch national survey to test the STRONGkids nutritional screening tool in hospitalized children. Clin Nutr 2010; 29:106–111. 9. Cross JH, Holden C, MacDonald A, et al . Clinical examination compared with anthropometry in evaluating nutritional status. Arch Dis Child 1995; 72:60–61. 10. Braegger C, Decsi T, Dias JA, et al. Practical approach to paediatric enteral nutrition: a comment by the ESPGHAN Committee on Nutrition. J Pediatr Gastroenterol Nutr
<urn:uuid:9116c239-ca3a-483f-a0cf-ecde509b2d7a>
CC-MAIN-2016-26
http://journals.lww.com/jpgn/Fulltext/2010/12003/Malnutrition_and_Growth.5.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.915119
1,790
2.84375
3
The unified global efforts to mitigate the high burden of vitamin and mineral deficiency, known as hidden hunger, in populations around the world are crucial to the achievement of most of the Millennium Development Goals (MDGs). We developed indices and maps of global hidden hunger to help prioritize program assistance, and to serve as an evidence-based global advocacy tool. Two types of hidden hunger indices and maps were created based on i) national prevalence data on stunting, anemia due to iron deficiency, and low serum retinol levels among preschool-aged children in 149 countries; and ii) estimates of Disability Adjusted Life Years (DALYs) attributed to micronutrient deficiencies in 136 countries. A number of countries in sub-Saharan Africa, as well as India and Afghanistan, had an alarmingly high level of hidden hunger, with stunting, iron deficiency anemia, and vitamin A deficiency all being highly prevalent. The total DALY rates per 100,000 population, attributed to micronutrient deficiencies, were generally the highest in sub-Saharan African countries. In 36 countries, home to 90% of the world’s stunted children, deficiencies of micronutrients were responsible for 1.5-12% of the total DALYs. The pattern and magnitude of iodine deficiency did not conform to that of other micronutrients. The greatest proportions of children with iodine deficiency were in the Eastern Mediterranean (46.6%), European (44.2%), and African (40.4%) regions. The current indices and maps provide crucial data to optimize the prioritization of program assistance addressing global multiple micronutrient deficiencies. Moreover, the indices and maps serve as a useful advocacy tool in the call for increased commitments to scale up effective nutrition interventions. Citation: Muthayya S, Rah JH, Sugimoto JD, Roos FF, Kraemer K, Black RE (2013) The Global Hidden Hunger Indices and Maps: An Advocacy Tool for Action. PLoS ONE 8(6): e67860. doi:10.1371/journal.pone.0067860 Editor: Abdisalan Mohamed Noor, Kenya Medical Research Institute - Wellcome Trust Research Programme, Kenya Received: October 25, 2012; Accepted: May 28, 2013; Published: June 12, 2013 Copyright: © 2013 Muthayya et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: The technical consultations were supported by Sight and Life (www.sightandlife.org). There was no funding involved in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: R.E. Black has no conflict of interest. S. Muthayya and J.D. Sugimoto worked as consultants for Sight and Life, a humanitarian nutrition think tank of DSM. DSM is a global vitamin producer and has been a global partner of the United Nations World Food Programme since 2007. J.H. Rah was employed by Sight and Life when the manuscript was drafted. K. Kraemer is employed by Sight and Life, and F. Roos has been employed as a full-time biostatistician by DSM Nutritional Products since 2007. This does not alter the authors' adherence to all the PLOS ONE policies on sharing data and materials. Globally, an estimated two billion lives are affected by a chronic deficiency of essential vitamins and minerals (micronutrients), collectively known as hidden hunger [1–4]. 
As the term hidden hunger indicates, the signs of undernutrition and hunger are less overtly visible in those affected by it. Nevertheless, its negative and often lifelong consequences for health, productivity, and mental development are devastating . Young children and women of reproductive age living in low-income countries are the most vulnerable. In recent years, volatile food prices and climate change have led to changes in dietary intake, with a shift away from foods which are rich in micronutrients, while retaining the consumption of low-micronutrient-containing staple foods which are relatively less expensive [6,7]. Consequently, an increasing proportion of the world’s population may be at risk of hidden hunger, with potential significant negative consequences for both global health and economic growth. Worldwide, the most widespread micronutrient deficiencies are of iron, zinc, vitamin A, iodine, and folate, but deficiencies of vitamin B12 and other B vitamins also commonly occur. In developing countries, multiple micronutrient deficiencies often occur together in the same population . These deficiencies account for approximately 7% of the global disease burden annually . The 2008 Lancet series on Maternal and Child Undernutrition reported that deficiencies of vitamin A and zinc were responsible for 0.6 million and 0.4 million child deaths respectively, and a combined 9% of global childhood Disability Adjusted Life Years (DALYs). Iron deficiency alone was associated with 115,000 maternal deaths . Iron and iodine deficiencies were related to cognitive impairment, but resulted in few child deaths. Even mild to moderate deficiencies of micronutrients lead to impaired intellectual and psychomotor development, poor physical growth, increased morbidity from infectious diseases in infants and young children, and decreased work productivity in adulthood [10–18]. Global databases on anemia and iron, vitamin A and iodine deficiency have provided useful information on the magnitude and distribution of individual deficiencies [2,–4]. However, a strong evidence base of the burden of collective micronutrient deficiency and its contributions to disease, both nationally and globally, is lacking. Such information will enable the development of appropriate interventions, which would effectively target those populations most affected by multiple micronutrient deficiencies. Interventions targeted at alleviating individual micronutrient deficiencies have achieved mixed results in their effectiveness [19–21]. A sustainable strategy that tackles co-existing deficiencies, such as home fortification with micronutrient powders for preschool-age children and staple food fortification for the general population, is therefore urgently required. Evidence has suggested that fortification with multiple micronutrients could be one of the most sustainable and cost-effective development investments . Indices and maps are useful tools for public health advocacy and planning, and can guide policy decisions. This paper describes the development of global indices and maps depicting hidden hunger, reflecting both the prevalence of multiple micronutrient deficiencies and the associated disease burden, to serve as a tool to stimulate global efforts towards scaling up nutrition interventions. By highlighting the global hidden hunger hotspots and providing a ranking index of affected countries, the maps are expected to be useful in informing strategies for unified efforts to eliminate hidden hunger. 
It is anticipated that these indices and maps will enable public health scientists and policy makers to prioritize program assistance for those countries most affected by hidden hunger. Materials and Methods Two separate datasets were compiled for the development of hidden hunger indices and maps: i) a database of the most up-to-date national prevalence estimates of anemia, stunting, vitamin A deficiency (VAD) in pre-school aged children, and iodine deficiency (ID) in school-aged children, for 190 countries for the years 1999-2009; and ii) data of the recent DALY estimates attributed to deficiencies of iron, zinc, vitamin A, and iodine for 192 countries. Using these datasets, hidden hunger maps and indices were created by i) combining national prevalence estimates of anemia, stunting, and VAD for preschool-age, children, together with separately added estimates of ID for school-age children; and ii) combining country-wide DALY estimates attributed to deficiencies of iron, zinc, and vitamin A for the population. Data on stunting, anemia, VAD, and ID prevalence were chosen on the basis of their contribution to hidden hunger, as well as the global availability of nationally-representative estimates. Deficiencies of folate and vitamin B12 were excluded from the dataset, due to the limited availability of national data. In the absence of national data on iron deficiency or iron deficiency anemia (IDA), prevalence estimates for anemia were used, recognizing that anemia could reflect both nutritional deficiencies and non-nutritional factors, such as infections, inflammation, and thalassemia or hemoglobinopathy. In this analysis, it was assumed that 60% of anemia was due to iron deficiency in non-malaria settings and 50% in malaria endemic areas [23,24]. Stunting prevalence was used as a proxy of zinc deficiency, as recommended by the International Zinc Nutrition Consultative Group . Estimates of anemia prevalence were obtained from two main sources: i) the World Health Organization (WHO) Global Database on Anemia , a part of the Vitamin and Mineral Nutrition Information System (VMNIS); and ii) Demographic and Health Surveys (DHS). Only nationally representative prevalence data for preschool-age children (0-4.99 years) were included, as this was the most vulnerable age group and anemia prevalence in children had strong correlations with the corresponding anemia rates in pregnant women (r=0.83), and in women of reproductive age (r=0.82). Wherever possible, data on children below 0.5 years were excluded, since the cut-off for anemia is not defined in this age group. As an exception, the national rural data for Bangladesh and data aggregated from several state surveys for Brazil were included as national estimates. For countries without national survey data, regression-based estimates developed by the WHO were used in our analyses . Data on VAD were obtained from the WHO Global Database on VAD, part of the VMNIS . Only national prevalences of low serum (or plasma) retinol concentration, using a cut-off of <0.70 µmol/L, were used. Prevalence estimates for preschool-age children were used because national survey data for other population groups, such as pregnant women, were limited. For countries lacking national survey data, regression-based estimates developed by the WHO were used in our analyses . All countries (n=37) with a 2005 gross domestic product (GDP) ≥US$ 15,000 were assumed to be free from VAD, and therefore did not have serum retinol data. 
Data on ID were extracted from the WHO Global Database on ID, part of the VMNIS, and from the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) . Additional new national data available since 2007 for 50 countries were also included . The prevalence of ID was defined as the proportion of school-age children having a urinary iodine (UI) concentration <100 µg/L. For countries where only the median UI was reported, regression estimates of ID prevalence derived from UI concentration studies compiled in the WHO VMNIS database were used. However, exceptions were made for countries with pooled data from multiple surveys (Italy, Spain, and the Russian Federation), with national data on ID restricted to the urban population (New Zealand, Sudan), and with national data for population groups other than only school age (the Czech Republic, France, Kazakhstan, Oman, Slovenia, Tajikistan, Ukraine, and the UK) [3,26]. For countries without national survey data, ID prevalence estimates were left missing. Nationally representative data on moderate and severe stunting among preschool-age children were extracted from the WHO database on child growth and malnutrition, the UNICEF global database on child growth, and the DHS and MICS surveys. Stunting was defined as height-for-age z-scores below -2 of the new WHO growth reference standards . For the few countries with missing values (n=11), the mean stunting prevalence of countries in the same WHO region, weighted by the population size for 2009, was assigned as the best estimate. For the DALY dataset, the most recent DALY estimates attributed to deficiencies of iron, zinc, and vitamin A in 136 countries were compiled. DALYs are the sum of Years of Life Lost (YLLs) and Years Lived with Disability (YLDs) for incident conditions. For the calculation of these DALYs, expert working groups conducted comprehensive reviews of data on risk-factor exposure and hazard for 14 epidemiological sub-regions of the world, by age and sex. Data reflected the current status of mortality, the prevalence of micronutrient deficiency, and existing micronutrient programs at the date of the calculations. The calculations also adjusted the estimates in order not to double-count. The contribution of a risk factor to disease or mortality was expressed as the fraction of disease or death attributable to the risk factor in a population, and was referred to as the population-attributable fraction (PAF). When estimating the total effects of individual distal factors on disease, both mediated and direct effects were considered because, in the presence of mediated effects, controlling for the intermediate factor would attenuate the effects of the more distal factors. When estimating the joint effects of the more distal factors and the intermediate factors, the mediated and direct effects were separated. Further details of the estimation of DALYs attributed to micronutrient deficiencies are described elsewhere [8,24,28]. In addition to micronutrient deficiency variables, national data on important proximate determinants of hidden hunger were compiled. These include the percentage of the population with inadequate dietary energy intakes estimated by the Food and Agriculture Organization (FAO), and the Human Development Index (HDI) and Multidimensional Poverty Index (MPI) of the United Nations Development Programme (UNDP) [29,30]. 
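As a rough illustration of the attributable-burden arithmetic sketched above (DALYs as YLLs plus YLDs, scaled by a population-attributable fraction), the snippet below uses Levin's single-risk-factor formula with invented numbers. The GBD-style calculations cited in the text are far more elaborate, working by age, sex, and sub-region and adjusting joint effects to avoid double-counting, so this is only a toy version.

```python
# Toy version of the attributable-burden arithmetic; all numbers are invented.

def dalys(yll: float, yld: float) -> float:
    """DALYs are the sum of Years of Life Lost and Years Lived with Disability."""
    return yll + yld

def levin_paf(exposure_prevalence: float, relative_risk: float) -> float:
    """Population-attributable fraction for a single dichotomous risk factor."""
    p, rr = exposure_prevalence, relative_risk
    return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

total_dalys = dalys(yll=180_000, yld=70_000)   # burden of one outcome in one country
paf = levin_paf(exposure_prevalence=0.40,      # 40% of children deficient (hypothetical)
                relative_risk=1.8)             # risk of the outcome given deficiency (hypothetical)
attributable_dalys = paf * total_dalys         # burden attributed to the deficiency

print(f"PAF = {paf:.1%}; attributable DALYs = {attributable_dalys:,.0f}")
```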
The HDI provides a composite measure of three basic dimensions of human development: a long and healthy life, education, and the standard of living. The MPI identifies overlapping deprivations at a household level across the same three dimensions as the HDI, and shows the average number of poor people and deprivations with which poor households contend. Countries were grouped by WHO region. We defined the Hidden Hunger Index (HHI-PD) for preschool-age children as the average of three deficiency prevalence estimates: preschool children affected by stunting, anemia due to iron deficiency, and VAD. The three components were equally weighted (HHI-PD score = [stunting (%) + anemia (%) + low serum retinol (%)]/3). The iodine deficiency estimates for school-age children were not included in the HHI estimation due to its weak correlations with other micronutrient deficiencies (r=0.01-0.18). The HHI-PD score ranged between the best and worst possible scores of 0 and 100, respectively. Applying arbitrary cut-offs, HHI-PD scores between 0 and 19.9 were considered mild, 20-34.9 as moderate, 35-44.9 as severe, and 45-100 as alarmingly high. Highly developed countries with a 2007 Human Development Index (HDI) score above 0.9 (n=41) were assumed to have a low prevalence of micronutrient deficiencies, and were therefore excluded from this analysis. The 2007 Life Expectancy Index was used as a substitute for countries with missing HDI scores. In addition, a few countries with neither VAD nor anemia prevalence data nor regression estimates were excluded. The Hidden Hunger Indices which reflected the global disease burden were computed in two different ways, as i) the total hidden-hunger-associated DALYs per 100,000 population (HHI-DBa); and ii) the total hidden-hunger-associated DALYs per country (HHI-DBu). The DALYs attributed to ID were not included in the HHI estimation due to incomplete data for several countries, its weak correlations with other micronutrient deficiencies, and its relatively small contribution to the total DALY-based HHI. The associations between HHI-PD and both HHI-DBa and indicators of human development and inadequate dietary energy intakes were examined using the Spearman rank correlation coefficient. A total of 149 countries with a 2007 HDI value <0.9 were included in the HHI-PD estimation (Appendix S1). A large proportion of the 41 countries excluded (HDI≥0.9) were located in Europe, and had a national prevalence of stunting and VAD missing or, when available, had prevalence estimates of <10% for stunting and anemia due to iron deficiency (Appendix S2). The country with the highest HHI-PD score for preschool-age children was Niger and the lowest was Hungary (Figure 1). Of the 20 countries with the highest HHI-PD scores, 18 were in sub-Saharan Africa and two, India and Afghanistan, were in Asia. The majority of these countries had child stunting, anemia due to iron deficiency, and VAD rates among preschool-age children of over 40%, 30%, and 50%, respectively (Figure 1). The majority of the countries with low HHI-PD scores had child stunting and VAD prevalence of less than 10%. The hidden hunger index (HHI-PD) was estimated based on national estimates of the prevalence of stunting, anemia due to iron deficiency, and low serum retinol concentration. Globally, there were hot spots of hidden hunger, with the prevalence being alarmingly high in sub-Saharan Africa, and severe in many countries in South-Central/South-East Asia (Figure 2). 
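A minimal sketch of the HHI-PD arithmetic and the severity cut-offs defined above is given below. The country names and prevalence values are invented for illustration and are not taken from the paper's dataset.

```python
# Equally weighted HHI-PD score and the paper's severity cut-offs.

def hhi_pd(stunting_pct: float, ida_pct: float, vad_pct: float) -> float:
    """HHI-PD = [stunting (%) + anemia due to iron deficiency (%) + low serum retinol (%)] / 3."""
    return (stunting_pct + ida_pct + vad_pct) / 3.0

def classify(score: float) -> str:
    """Apply the arbitrary cut-offs: <20 mild, 20-34.9 moderate, 35-44.9 severe, 45+ alarmingly high."""
    if score < 20:
        return "mild"
    if score < 35:
        return "moderate"
    if score < 45:
        return "severe"
    return "alarmingly high"

examples = {
    "Hypothetical country X": (45.0, 38.0, 55.0),
    "Hypothetical country Y": (12.0, 15.0, 4.0),
}
for name, (stunting, ida, vad) in examples.items():
    score = hhi_pd(stunting, ida, vad)
    print(f"{name}: HHI-PD = {score:.1f} ({classify(score)})")
```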
Most South American countries only had a mild-to-moderate degree of hidden hunger. In many countries, ID prevalence in school-age children did not conform to the magnitude of hidden hunger. For instance, the Democratic Republic of the Congo and Liberia, both with an alarmingly high degree of hidden hunger, had low ID prevalences of 1.5% and 3.5%, respectively. In addition, in Latvia, the Russian Federation, Estonia, and Malaysia, all of which exhibited a mild degree of hidden hunger, ID prevalence was as high as 76.8, 58.6, 67.0 and 48.2%, respectively. The hidden hunger index HHI-PD was estimated based on national estimates of the prevalence of stunting, anemia due to iron deficiency, and low serum retinol concentration. A strong inverse association was noted between the HHI-PD and 2007 HDI values (Figure 3). There was a moderate positive association between HHI-PD and the proportion of the population with inadequate dietary energy (Figure 4). In many countries, the HHI-PD score was high, but the percent population with inadequate dietary energy was low. The alphabet characters symbolize the first initial of each country; the font sizes are proportional to the population size. Font color represents different regions, such that black represents Africa; red, East Asia; yellow, West Central Asia; green, Central and South America; turquoise, the Pacific Islands, and the Caribbean; purple, South Asia; and blue, Europe, North America, Australia, and New Zealand. The alphabet characters symbolize the first initial of each country; the font sizes are proportional to the population size. Font color represents different regions, such that black represents Africa; red, East Asia; yellow, West Central Asia; green, Central and South America; turquoise, the Pacific Islands, and the Caribbean; purple, South Asia; and blue, Europe, North America, Australia, and New Zealand. The DALY-based hidden hunger indices (HHI-DBa and HHI-DBu) were also only calculated for the 136 countries with a 2007 HDI value <0.9 and available DALY estimates (Figure 5 Appendix S3). The majority of the countries with high HHI-DBa scores were in sub-Saharan Africa (Figure 5). The disease burden was highest in Sierra Leone, with an estimated total of 5,870 DALYs per 100,000 population, and lowest in Cuba, with 15 DALYs per 100,000 population. Of the top 20 countries, 18 were in sub-Saharan Africa while two, Afghanistan and India, were in Asia. Thirteen of the 20 countries with the highest DALY rates (HHI-DBa) were also among the countries with the highest HHI-PD scores. A Spearman rank correlation of 0.89 was observed between HHI-PD and HHI-DBa (Figure 6). The hidden hunger index HHI-DBa was estimated based on estimates of the DALYs per 100,000 population, attributable to iron, vitamin A, and zinc deficiencies. Prevalence-based HHI-PD estimates were not available for three of the 136 countries with HHI-DBa estimates (the Bahamas, Bahrain, and Somalia). Among the top 36 countries with 20% or greater prevalence of childhood stunting, and home to 90% of all stunted children globally, the percent total DALYs attributed to micronutrient deficiencies ranged from 1.5% in South Africa to 12.3% in Côte d’Ivoire, with deficiencies of vitamin A and zinc accountable for the largest proportion (data not shown). Conversely, the HHI-DBu scores were high in South Asian countries, such as India, Bangladesh, and Pakistan (Figure 7). 
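To make the two DALY-based indices and the reported rank correlation concrete, here is a small sketch. The country records are made up, and the Spearman coefficient is computed with scipy purely for convenience; a hand-rolled rank correlation would do just as well.

```python
# Sketch of HHI-DBa (attributable DALYs per 100,000 population), HHI-DBu
# (unadjusted national total), and the Spearman correlation with HHI-PD.
# All country records below are hypothetical.
from scipy.stats import spearmanr

countries = [
    # name, DALYs attributed to iron + vitamin A + zinc deficiency, population, HHI-PD score
    {"name": "A", "mn_dalys": 310_000,   "population": 5_200_000,   "hhi_pd": 48.0},
    {"name": "B", "mn_dalys": 95_000,    "population": 9_800_000,   "hhi_pd": 27.0},
    {"name": "C", "mn_dalys": 1_450_000, "population": 160_000_000, "hhi_pd": 36.0},
    {"name": "D", "mn_dalys": 4_000,     "population": 3_000_000,   "hhi_pd": 9.0},
]

for c in countries:
    c["hhi_dba"] = c["mn_dalys"] / c["population"] * 100_000  # rate per 100,000 population
    c["hhi_dbu"] = c["mn_dalys"]                              # population-unadjusted total
    print(f'{c["name"]}: HHI-DBa = {c["hhi_dba"]:,.0f} per 100,000; HHI-DBu = {c["hhi_dbu"]:,}')

rho, _ = spearmanr([c["hhi_pd"] for c in countries], [c["hhi_dba"] for c in countries])
print(f"Spearman rho (HHI-PD vs HHI-DBa) = {rho:.2f}")
```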
The maps and indices described in this paper provide much-needed information on the collective magnitude and distribution of multiple micronutrient deficiencies across the globe, and their attributed disease burden, for potential use in advocacy and planning efforts to guide health and nutrition policies. Notably, a number of countries in sub-Saharan Africa, as well as India and Afghanistan, had an alarmingly high level of hidden hunger, with stunting, IDA, and VAD all being highly prevalent amongst preschool-age children. Countries in sub-Saharan Africa, such as Sierra Leone and Niger, exhibited the highest levels of population-adjusted disease burden attributed to micronutrient deficiencies. In the 36 high-burden countries, deficiencies of micronutrients, especially vitamin A and zinc, were responsible for 2-12% of the total DALYs. In contrast, due to the large population size of South Asia, the population-unadjusted total DALYs attributed to hidden hunger were greatest in India, Bangladesh, and Pakistan. The global hidden hunger indices and maps capture the collective burden of micronutrient deficiencies and their contribution to the disease burden. Earlier indices such as the Global Hunger Index (GHI), reflecting measures of food security, undernutrition, and child mortality, capture the multidimensional aspects and consequences of hunger, caused mainly by food and caloric deficit, and do not take into account the burden and consequences of pervasive hidden hunger [31,32]. Maps depicting single micronutrient deficiencies have served to inform policy makers and the scientific community of the extent of individual vitamin and mineral deficiencies, but do not illustrate the more commonly observed multiple micronutrient deficiencies. This offers the public health community and policy makers a novel opportunity to develop a unified and comprehensive approach to targeting the alleviation of multiple micronutrient deficiencies in high burden countries, which is essential to achieving most of the MDGs, and is at the core of the Scaling Up Nutrition (SUN) Movement’s Road Map, which focuses on implementing evidence-based nutrition interventions and integrating nutrition goals across sectors, including health, social protection, poverty alleviation, national development, and agriculture . Deficiencies of various micronutrients often share a common etiology (for example, low consumption of food from animal, fruit, and vegetable origin, or losses due to frequent infections), thereby tending to co-occur and correlate in the same population. In this analysis, there were moderate to high correlations between the prevalence of stunting, anemia, and low serum retinol among preschool children at a country level (data not shown). Similar trends were observed when the prevalence estimates were only based on national survey data, excluding regression-based estimates (data not shown). This suggests that the associations were robust, and not an artifact of the regression models used by the WHO to estimate the prevalence of VAD and anemia for countries with missing national survey data. Iodine deficiency estimates were an exception, in that they did not correlate with the prevalence of stunting, anemia, and VAD. Consequently, the HHI-PD tabulated for the 149 countries represented the combined prevalence of stunting, anemia due to iron deficiency, and VAD. Estimates of ID were not incorporated but presented separately, due to the lack of correlation with deficiencies of other micronutrients. 
The greatest proportions of children with ID were in the Eastern Mediterranean (46.6%), European (44.2%), and African (40.4%) regions. However, the high prevalence of ID in populations must be viewed with caution, due to the high intra-individual variability in urinary iodine in both spot and 24-hour urine collections in populations with adequate iodine intake. The varying coverage of salt iodization, combined with the fact that the sources of iodine differ from those of other micronutrients, sets it apart from the other deficiencies that cluster together. However, there could be exceptions among these countries, particularly with regard to ID, where salt iodization is no longer mandatory and the prevalence of low UI concentration is high. Further, other micronutrient deficiencies not addressed in this paper, such as folate and vitamin B12, may be prevalent in these countries. Countries in sub-Saharan Africa, the only developing region where the numbers of malnourished children have been rising in recent years, exhibited the highest rates of hidden hunger. Low-quality diets, as well as frequent infections, are likely to be the key causal factors, further compounded by poor economic conditions and repressive political systems. In Asia, India and Afghanistan exhibited the most severe magnitude of multiple micronutrient deficiencies. India is host to the largest number of undernourished children in the world. It is widely believed that India’s limited success in dealing with undernutrition is linked to poor governance, including the lack of a strong national agenda against malnutrition within the highest executive offices; a lack of consistent monitoring of the situation based on reliable data; and an inability to comprehend malnutrition as a holistic issue, which is affected by the quality of interventions across a number of sectors, including water and sanitation, education, agriculture, and others. Instead, malnutrition is viewed primarily as a problem of hunger and food distribution, which can be dealt with through supplementary feeding and subsidized distribution systems. There was a strong inverse correlation between HHI-PD and HDI, regardless of the use of only national estimates or both national and regression estimates in the calculation of HHI-PD (data not shown). As expected, countries with high HDI tended to have low HHI-PD, and vice versa. This highlights the importance of addressing hidden hunger in order to reduce general deprivation and improve health and education, and vice versa. Conversely, the HHI-PD was only weakly associated with the measure of undernourishment, which reflects the proportion of populations with an inadequate energy intake. This indicates that the HHI-PD measures a form of hunger associated less with energy deficiency and more with a lack of essential micronutrients. The current indices and maps are therefore particularly helpful when planning program assistance for populations which suffer from deficiencies of one or more micronutrients. The DALY-based indices and maps were intended to capture the consequences of micronutrient deficiencies globally. The population-adjusted DALY rates attributed to micronutrient deficiencies were largest in sub-Saharan African countries, as was observed in the hidden hunger prevalence estimates. The DALYs attributed to deficiencies of iron, vitamin A, and zinc were strongly correlated with one another (data not shown). 
Overall, the hidden hunger indices based on prevalence estimates and DALYs were strongly correlated (r=0.9), implying, as expected, that the disease burden due to hidden hunger tended to be greater in countries where micronutrient deficiencies were prevalent. By contrast, the population-unadjusted total disease burdens attributable to hidden hunger were greatest in countries with large populations in South Asia. A few limitations of the indices and maps need to be considered. The indices were not comprehensive in their representation of global hidden hunger due to the limited availability of national data pertaining to key micronutrients. In the absence of biochemical indicators of zinc deficiency and the validation of the adequacy of zinc in national food supplies, national stunting prevalence was used to reflect population zinc status, recognizing that this does not reflect the true prevalence of zinc deficiency, and that multiple factors besides zinc deficiency could lead to impaired linear growth. Moreover, to arrive at national estimates of IDA, an assumption of 50-60% of anemia attributable to iron deficiency was made. These assumptions need further validation, however. The HHI-PD estimates were based on prevalence data for preschool-age children, without taking into account other important and recognized vulnerable groups, such as pregnant women. Overall, due to poor data coverage of micronutrient deficiency variables, the HHI-PD could only be estimated for approximately 60% of the countries. The prevalence estimates derived from regression methods for countries lacking national data were, at best, an approximation, which may not accurately reflect the true burden of hidden hunger. In addition, estimates of the correlation between HHI-PD and HHI-DBa must be considered with respect to the fact that there is an overlap in the data on anemia, stunting, and serum retinol used to calculate both indices. In conclusion, more high-quality national data on the deficiency status of other key vitamins and minerals and a better estimation of zinc and iron deficiency are warranted to improve the measure of global hidden hunger. The regression estimates for anemia and VAD may need to be updated, in order to take into account more recent country prevalence of deficiencies and covariates measuring health status and development indicators. Despite these further needs, the current indices and maps capturing the burden and consequences of hidden hunger provide crucial evidence for the appropriate targeting and prioritizing of comprehensive and inclusive program assistance aiming to tackle global multiple micronutrient deficiencies. Moreover, the indices and maps are believed to serve as useful tools to call for urgent and unified efforts to stimulate relevant global advocacy efforts towards the continued scaling up of nutrition interventions. Global hidden hunger indices and maps as an advocacy tool The current and growing support for the Scaling up Nutrition (SUN) Movement illustrates the unprecedented global political will to prioritize food and nutrition security as being central to the development and achievement of the MDGs. The main investors in SUN are national governments themselves. Governments require tools which enable them to make informed policy and budget decisions. The current hidden hunger indices and maps provide advocates with a tool to further empower decision makers to better understand and visualize the importance of prioritizing interventions that address hidden hunger. 
This is critical, if political will is to be transformed into effective and scaled-up nutrition direct and nutrition-sensitive interventions. Globally, an estimated two billion people are affected by deficiencies of essential vitamins and minerals, collectively known as hidden hunger, which negatively impact on health and economic development. The hidden hunger indices and maps illustrate both the burden of multiple micronutrient deficiencies and their contribution to the disease burden. They also provide a useful tool for advocates to illustrate the real need for multiple micronutrient interventions to address hidden hunger. In addition, they provide useful information for policy makers in decision making and prioritizing interventions, and offer valuable information for public health scientists as a basis for action, and the subsequent monitoring and evaluation of preventive programs. Of the 20 countries with the highest HHI-PD scores, 18 were in sub-Saharan Africa and two were in Asia. Stunting, iron deficiency anemia, and vitamin A deficiency were highly prevalent amongst preschool children in countries with the highest HHI-PD. The hidden hunger indices provide evidence for the appropriate targeting and prioritizing of comprehensive and inclusive nutrition programs which address global hidden hunger. The HHI-PD and maps also provide valuable information on hot spots where the prevalence of hidden hunger is alarmingly high, and where focused and scaled-up interventions are most critical to the attainment of the MDGs. Appendix S1. Hidden Hunger Index (HHI) scores by country and region. Appendix S2. Prevalence of micronutrient deficiencies among preschool-aged children and school-aged children (for low urinary iodine) in 41 countries with a 2007 Human Development Index (HDI) value >0.9 and excluded from estimation of the hidden hunger indices. Appendix S3. Population adjusted DALY estimates by country and region. The authors wish to acknowledge Laurence Curty and Andrew Thompson, who were involved in the data collection process. The authors are indebted to Jane Badham for her skillful editorial support. We also wish to thank all of the children who participated in this project. The authors wish to thank the participants of the first technical consultation held in October 2009 in Bangkok, Thailand: Martin Bloem, Saskia de Pee (World Food Programme, Italy); Keith West, Parul Christian, Kerry Schulze, Alain Labrique (Johns Hopkins Bloomberg School of Public Health, USA); Kenneth Brown (UC Davis, USA); Akoto Osei (Helen Keller International, Cambodia); Victoria Quinn (Helen Keller International, USA); Lynnette Neufeld (Micronutrient Initiative, Canada); Pieter Jooste (MRC, South Africa); Sean Lynch (Eastern Virginia Medical School, USA); Regina Moench-Pfanner, Arnaud Laillou (GAIN, Switzerland); Meera Shekar (World Bank, USA); and the second technical consultation held in August 2010 in Washington, DC, USA: Rafael Flores Ayala (Center for Disease Control, USA), Sean Lynch (Eastern Virginia Medical School, USA), Harold Alderman (World Bank, USA), Keith West, Parul Christian, Alain Labrique (Johns Hopkins Bloomberg School of Public Health, USA), Kenneth Brown (UC Davis, USA), Emorn Wasantwisut (Mahidol University, Thailand), and Omar Dary (Academy for Educational Development, USA). Colin Mathers at WHO assisted with the estimation of DALYs. Conceived and designed the experiments: SM JHR KK. Analyzed the data: JS FR. Contributed reagents/materials/analysis tools: REB. 
Wrote the manuscript: SM JHR SM FR KK REB.
- 1. WHO (2002) World Health Report 2002: Reducing risks, promoting healthy life: Overview. Geneva: World Health Organization.
- 2. WHO (2008) Worldwide prevalence of anaemia 1993-2005. WHO Global Database on Anaemia. Geneva: World Health Organization.
- 3. WHO (2004) Iodine status worldwide. WHO Global Database on Iodine Deficiency. Geneva: World Health Organization.
- 4. WHO (2009) Global prevalence of vitamin A deficiency in populations at risk 1995–2005. WHO Global Database on Vitamin A Deficiency. Geneva: World Health Organization.
- 5. Micronutrient Initiative (2009) Investing in the Future: A united call to action on vitamin and mineral deficiencies – Global report 2009. Ottawa: Micronutrient Initiative.
- 6. Bloem MW, Semba RD, Kraemer K (2010) Castel Gandolfo Workshop: an introduction to the impact of climate change, the economic crisis, and the increase in the food prices on malnutrition. J Nutr 140: 132S–135S. doi:10.3945/jn.109.112094. PubMed: 19923395.
- 7. United Nations Standing Committee on Nutrition (2008) The Impact of High Food Prices on Maternal and Child Nutrition. Geneva: UNSCN. http://www.unscn.org/Publications/html/CFS_SCNB.pdf (Accessed: 2011).
- 8. Allen LH, Peerson JM, Olney DK (2009) Provision of multiple rather than two or fewer micronutrients more effectively improves growth and other outcomes in micronutrient-deficient children and adults. J Nutr 139: 1022–1030. doi:10.3945/jn.107.086199. PubMed: 19321586.
- 9. Ezzati M, Lopez AD, Rodgers A, Murray CJ (2004) Comparative quantification of health risks: The global and regional burden of disease attributable to selected major risk factors. Geneva: World Health Organization.
- 10. Lozoff B, Jimenez E, Wolf AW (1991) Long-term developmental outcome of infants with iron deficiency. N Engl J Med 325: 687–694. doi:10.1056/NEJM199109053251004. PubMed: 1870641.
- 11. Haas JD, Brownlie T (2001) Iron deficiency and reduced work capacity: a critical review of the research to determine a causal relationship. J Nutr 131: 676S–688S. PubMed: 11160598.
- 12. Pollitt E (2001) The developmental and probabilistic nature of the functional consequences of iron-deficiency anemia in children. J Nutr 131: 669S–675S. PubMed: 11160597.
- 13. Sommer A, West KP Jr (1996) Vitamin A deficiency: Health, survival, and vision. New York: Oxford University Press.
- 14. Christian P, West KP Jr, Khatry SK, Katz J, LeClerq SC et al. (2000) Night blindness during pregnancy and subsequent mortality among women in Nepal: Effects of vitamin A and beta-carotene supplementation. Am J Epidemiol 152: 542–547. doi:10.1093/aje/152.6.542. PubMed: 10997544.
- 15. Fawzi WW, Chalmers TC, Herrera MG, Mosteller F (1993) Vitamin A supplementation and child mortality. A meta-analysis. JAMA 269: 898–903. doi:10.1001/jama.1993.03500070078033. PubMed: 8426449.
- 16. Zimmermann MB, Jooste PL, Pandav CS (2008) Iodine-deficiency disorders. Lancet 372: 1251–1262. doi:10.1016/S0140-6736(08)61005-3. PubMed: 18676011.
- 17. Brown KH, Peerson JM, Rivera J, Allen LH (2002) Effect of supplemental zinc on the growth and serum zinc concentrations of prepubertal children: a meta-analysis of randomized controlled trials. Am J Clin Nutr 75: 1062–1071. PubMed: 12036814.
- 18. Bhutta ZA, Black RE, Brown KH, Gardner JM, Gore S et al. (1999) Prevention of diarrhea and pneumonia by zinc supplementation in children in developing countries: pooled analysis of randomized controlled trials. Zinc Investigators’ Collaborative Group. 
J Pediatr 135: 689–697. doi:10.1016/S0022-3476(99)70086-7. PubMed: 10586170.
- 19. Black RE (1998) Therapeutic and preventive effects of zinc on serious childhood infectious diseases in developing countries. Am J Clin Nutr 68 (2 Suppl): 476S–479S. PubMed: 9701163.
- 20. Stoltzfus R (2001) Defining iron-deficiency anemia in public health terms: A time for reflection. J Nutr 131 (2 Suppl): 565S–567S. PubMed: 11160589.
- 21. Costello AM de L, Osrin D (2003) Micronutrient status during pregnancy and outcomes for newborn infants in developing countries. J Nutr 133 (Suppl): 1757S–1764S. PubMed: 12730495.
- 22. Horton S, Mannar V, Wesley A (2008) Micronutrient Fortification (Iron and Salt Iodization). Best Practice Papers from Copenhagen Consensus. http://www.copenhagenconsensus.com/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fCCC%2fBPP_Fortification.pdf (Accessed: November 2, 2010).
- 23. Rastogi R, Mathers CD (2000) Global burden of iron deficiency anaemia in the year 2000. http://www.who.int/healthinfo/statistics/bod_irondeficiencyanaemia.pdf (Accessed: September 15, 2010).
- 24. Black RE, Allen LH, Bhutta ZA, Caulfield LE, de Onis M et al. (2008) Maternal and child undernutrition: global and regional exposures and health consequences. Lancet 371: 243–260. doi:10.1016/S0140-6736(07)61690-0. PubMed: 18207566.
- 25. International Zinc Nutrition Consultative Group (2007) Quantifying the risk of zinc deficiency: Recommended indicators. IZiNCG Tech Brief No. 1. Davis.
- 26. Andersson M, Karumbunathan V, Zimmermann MB (2012) Global iodine status in 2011 and trends over the past decade. J Nutr 142: 744–750. doi:10.3945/jn.111.149393. PubMed: 22378324.
- 27. WHO (2006) WHO Child Growth Standards: Length/height-for-age, weight-for-age, weight-for-length, weight-for-height and body mass index-for-age: methods and development. Geneva: World Health Organization.
- 28. WHO (2009) Global health risks: mortality and burden of disease attributable to selected major risks. Geneva: World Health Organization.
- 29. FAO (2008) FAO Statistics Division. Prevalence of undernourishment in total populations in 2008 (percentage). http://www.fao.org/fileadmin/templates/ess/documents/food_security_statistics/PrevalenceUndernourishment_en.xls (Accessed: October 05, 2012).
- 30. UNDP (2007) Human Development Reports. http://hdr.undp.org/en/statistics/data/ (Accessed: October 05, 2012).
- 31. International Food Policy Research Institute/Welthungerhilfe/Concern (2007) The challenge of hunger 2007: Global Hunger Index: facts, determinants, and trends. Washington, DC, Bonn, and Dublin.
- 32. Global Hunger Index: The Challenge of Hunger: Focus on the Crisis of Child Undernutrition (2010) International Food Policy Research Institute, Concern Worldwide and Welthungerhilfe, Bonn, Washington DC and Dublin.
- 33. Scaling Up Nutrition (SUN) Movement Strategy (2012-2015). http://scalingupnutrition.org/wp-content/uploads/2012/10/SUN-MOVEMENT-STRATEGY-ENG.pdf (Accessed: February 16, 2013).
- 34. Food and Agriculture Organization (2002) Food insecurity: When people must live with hunger and fear starvation. The state of food insecurity in the world. Rome: Food and Agriculture Organization.
- 35. Saxena NC (2008) Hunger, Under-nutrition and Food Security in India. India. http://www2.undprcc.lk/areas_of_work/pdf/Hunger_in_India_2009.pdf (Accessed: November 2, 2010).
- 36. Mohmand SK (2012) Policies Without Politics: Analysing Nutrition Governance in India. Analysing Nutrition Governance: India Country Report. 
http://www.ids.ac.uk/files/dmfile/DFID_ANG_India_Report_Final.pdf (Accessed: 2013).
<urn:uuid:238e8358-2e82-4228-ab5a-4c6fdb1975d2>
CC-MAIN-2016-26
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0067860
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913699
9,204
3.328125
3
Day 4 - 5, 30 - 31 March 2011 30.03.2011 - 31.03.2011 Venice (Italian: Venezia [veˈnɛttsja], Venetian: Venexia [veˈnɛsja]) is a city in northern Italy known both for tourism and for industry, and is the capital of the region Veneto, with a population of about 270,660 (census estimate 30 April 2009). Together with Padua, the city is included in the Padua-Venice Metropolitan Area (population 1,600,000). The name is derived from the ancient Veneti people who inhabited the region as of the 10th century B.C. The city was historically the capital of the Venetian Republic. Venice has been known as "La Dominante", "Serenissima", "Queen of the Adriatic", "City of Water", "City of Masks", "City of Bridges", "The Floating City", and "City of Canals". The city stretches across 117 small islands in the marshy Venetian Lagoon along the Adriatic Sea in northeast Italy. The saltwater lagoon stretches along the shoreline between the mouths of the Po (south) and the Piave (north) Rivers. The Republic of Venice was a major maritime power during the Middle Ages and Renaissance, and a staging area for the Crusades and the Battle of Lepanto, as well as a very important center of commerce (especially silk, grain and spice trade) and art from the 13th century up to the end of the 17th century. This made Venice a wealthy city throughout most of its history. It is also known for its several important artistic movements, especially the Renaissance period. Venice has played an important role in the history of symphonic and operatic music, and it is the birthplace of Antonio Vivaldi. From Venice, the trip will continue to Mestre. Mestre is a frazione (borough) of the comune of Venice, in Veneto, northern Italy. Located on the mainland, together with the neighbouring Marghera, Chirignago, Favaro Veneto and Zelarino, it includes c. 170,000 inhabitants of the comune, with the islands of Venice proper accounting for c. 60,000 and another 31,000 living on the other islands of the Venetian Lagoon. The city is connected to Venice by a large rail and road bridge, called Ponte della Libertà (Freedom Bridge). Mestre is the largest city in Italy that does not have the status of an autonomous comune. Lucerne (pronounced /ˌluːˈsɜrn/; German: Luzern, [luˈtsɛrn]; French: Lucerne, [lysɛʁn]; Italian: Lucerna, [luˈtʃerna]; Romansh: Lucerna; Lucerne German: Lozärn) is a city in north-central Switzerland, in the German-speaking portion of that country. Lucerne is the capital of the Canton of Lucerne and the capital of the district with the same name. With a population of about 76,200 people, Lucerne is the most populous city in Central Switzerland, and a nexus of transportation, telecommunications, and government of this region. The city's metropolitan area consists of 17 cities and towns located in three different cantons with an overall population of about 250,000 people. Due to its location on the shore of Lake Lucerne (der Vierwaldstättersee), within sight of Mount Pilatus and Rigi in the Swiss Alps, Lucerne has long been a destination for tourists. One of the city's famous landmarks is the Chapel Bridge (Kapellbrücke), a wooden bridge first erected in the 14th century. In 2010, Lucerne was voted the fifth most popular tourism destination in the world by Tripadvisor, and it has private hotels and schools, mostly on the shores of Lake Lucerne. The 'pit stop' for day 5 is the Postillon Bouchs Hotel, located above Lake Lucerne with an unrestricted view of the lake and the mountains. 
There's no information about the internet/wifi connection for this hotel (http://www.activehotels.com says that a Wireless Internet Hotspot is available in the entire hotel and costs CHF 7.80 per 30 minutes). Review of this hotel: click here
<urn:uuid:68839a56-6de2-4e36-a28e-a44a6a8e418d>
CC-MAIN-2016-26
http://journeylism.travellerspoint.com/5/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945181
943
3.03125
3
Data Note: Americans’ Views On The U.S. Role In Global Health With a new Congressional session beginning, policymakers preparing for budget discussions, and the ongoing Ebola crisis bringing attention to international health issues, it is an important time to understand Americans’ views on global health and the role of the U.S. government in addressing global health issues. The Kaiser Family Foundation has tracked public opinion on global health in-depth since 2009. This new survey shows the public, including a plurality of Republicans, Democrats and independents, wants funding for global health maintained. Americans’ top priorities for global health funding focus on meeting basic human needs such as improving access to clean water and food, and helping children; some other high profile issues fall further down the list including malaria, polio, chronic illness and reproductive health. A large majority of the public overestimates the share of the U.S. federal budget spent on foreign aid and most say the country spends too much on it, but the public is more supportive of spending specifically aimed at improving health in developing countries. Another top priority is the fight against Ebola in West Africa, a news story that has ranked among the most closely watched developments of 2014. The number of new Ebola cases has declined recently, and 4 in 10 now believe the outbreak in West Africa is under control, up from just 10 percent in October when Ebola was spreading quickly. Views Of U.S. Spending On Foreign Aid And Global Health Most Continue to Overestimate U.S. Spending On Foreign Aid, But Views Change After Hearing The Actual Amount Spent A large majority of the public overestimates the amount of the federal budget that is spent on foreign aid. Similar to past Kaiser polls, just 1 in 20 correctly state that 1 percent or less of the federal budget is spent on foreign aid. About half say it is more than 10 percent of the budget, and, on average, Americans say that spending on foreign aid makes up roughly a quarter of the federal budget. A majority of the public says the U.S. is spending too much on foreign aid, while just about 1 in 10 say too little and a quarter say about the right amount. However, after hearing the factual statement that foreign aid makes up about 1 percent of the federal budget the share saying “too much” drops in half from 56 percent to 28 percent and the share saying “too little” rises from 11 percent to 26 percent. More Support For Global Health Spending Than For Foreign Aid When asked about U.S. spending on global health, the public is more supportive than when asked about spending on foreign aid more generally. About 6 in 10 say the U.S. is now spending too little (27 percent) or about the right amount (36 percent) on efforts to improve health for people in developing countries, and a quarter say the U.S. is spending too much (26 percent). Over time, opinions on U.S. spending on improving health in developing countries have remained fairly stable. Partisan Differences In Views On U.S. Global Health Spending Reflective of overall partisan differences in opinion on the role of government and federal spending, there are differences in views of U. S. spending on global health by political party identification. Across parties, roughly 4 in 10 say the U.S. is spending about the right amount and in no case does a majority say the U.S. is spending too much or too little. Democrats are more likely to say the country is spending too little than too much (34 percent vs. 
18 percent), whereas Republicans are more likely to say the country is spending too much than too little (36 percent vs. 14 percent). Independents fall in the middle with roughly equal shares saying the U.S. spends too much (25 percent) or too little (29 percent) on health in developing countries. Most Say U.S. Global Health Spending Protects Americans’ Health and Improves U.S. Image About 7 in 10 say that spending money on improving health in developing countries helps protect the health of Americans by preventing the spread of diseases like SARS and Ebola, and nearly 6 in 10 say it helps to improve the image of the U.S. throughout the world. Over a third say spending on global health helps U.S. national security (37 percent) or the U.S. economy (36 percent). Democrats are more likely than Republicans to say that spending on global health helps in each of these areas, but still over 6 in 10 Republicans say spending helps protect Americans from disease (63 percent).
| Percent who say spending money on improving health in developing countries helps… | Total | Democrats | Independents | Republicans |
| --- | --- | --- | --- | --- |
| …protect the health of Americans by preventing the spread of diseases like SARS, bird flu, swine flu, and Ebola | 69% | 77% | 70% | 63% |
| …improve the U.S. image around the world | 57% | 68% | 59% | 46% |
| …U.S. national security by lessening the threat of terrorism originating in developing countries | 37% | 47% | 36% | 26% |
| …the U.S. economy by improving the circumstances of people who can buy more U.S. goods | 36% | 44% | 37% | 28% |
Priorities Within Global Health Americans often report that improving health in developing countries is one of many priorities it is important for the U.S. to address around the world.1 When asked which health issues are important for the U.S. to support globally, at least 7 in 10 say each area is important. As for the top priorities, more than half of the public say “one of the top priorities” in U.S. efforts in global health is improving access to clean water (57 percent), children’s health and vaccinations (53 percent) and reducing hunger and malnutrition (52 percent). Next behind these basic needs is the fight against the Ebola outbreak in West Africa, mentioned as one of the top priorities by over four in ten of the public (44 percent). In December, after this survey was administered, Congress approved $5.4 billion in appropriations for Ebola and global health security efforts internationally and in the United States.2 Some high-profile issues are further down the list, including malaria, polio, chronic illness and reproductive health. Ebola Outbreak Captures Public’s Attention, More Now Say It Is Mostly Under Control Attention To The Ebola Outbreak Since the fall, the public has remained captivated by news coverage of the Ebola outbreak in West Africa and the diagnosed cases here in the U.S., perhaps contributing to the sentiment that fighting Ebola should be one of our country’s top global health priorities. In fact, the Kaiser Health Policy News Index finds that news stories about Ebola in West Africa and in the U.S. ranked among the most closely followed news stories of 2014, and were the most closely followed health-related stories by a large margin. The share of Americans who report following news of the Ebola outbreak in West Africa increased as the outbreak spread. In late summer and early fall, about 6 in 10 Americans reported following news of the outbreak in West Africa closely and about 7 in 10 said they followed news about the virus in the U.S. closely. 
By late fall, attention to news coverage of Ebola in the U.S. and abroad had grown, with at least three-quarters saying they closely followed the story in November and December. More Now Say Ebola Outbreak in Africa Is Mostly Under Control Many Americans now think the outbreak is under control. Forty-one percent say the epidemic is under control, up significantly from 10 percent in October when Ebola was spreading quickly, and about half (51 percent) say the outbreak is not yet under control. Kaiser Family Foundation 2013 Survey of Americans on the U.S. Role in Global Health, http://kff.org/global-health-policy/poll-finding/2013-survey-of-americans-on-the-u-s-role-in-global-health/ Kaiser Family Foundation, The U.S. Global Health Budget: Analysis of Appropriations for Fiscal Year 2015, http://kff.org/global-health-policy/issue-brief/the-u-s-global-health-budget-analysis-of-appropriations-for-fiscal-year-2015/
<urn:uuid:6b8d2e3a-5d9f-4897-bafe-4c9ede7b0cc1>
CC-MAIN-2016-26
http://kff.org/global-health-policy/poll-finding/data-note-americans-views-on-the-u-s-role-in-global-health/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939309
1,745
2.796875
3
So you've decided to spindle spin. Now what? You need a spindle. Not just any spindle -- a spindle that you will love and that will inspire you to learn something new. There are a lot of different spindles to choose from. How do you pick? What do all of these words mean? Drop spindle? Hand spindle? Top whorl? Bottom whorl? What's a whorl? Takhli? What about all the different woods? What about the weight? At the most basic level, there are three types of spindle: top whorl, bottom whorl and supported. The whorl is the disk or ball that provides the weight to keep the spin going. All three types are hand spindles. Top and bottom whorl are both types of drop or hand spindles. Drop spindles can spin almost any fiber you want from dog hair to flax to wool to Ingeo to silk.
(Pictured spindles: Bosworth midi, Grafton Mala, Cascade)
A top whorl spindle has a hook on one end above the whorl and a shaft below the whorl for storing your finished yarn. Top whorl spindles can come in many weights and sizes. Top whorl spindles usually spin faster and quite often are lighter than their bottom whorl counterparts. They are great for very fine, lace weight yarns. They also usually are quicker to load since they have no need of wrapping or half-hitching to keep your yarn on the spindle. A bottom whorl spindle usually just has a shaft and a whorl, sometimes a hook and sometimes not. Finished yarn is stored above the whorl. To keep the yarn on the spindle, you usually have to do some wrapping and half-hitching. Bottom whorl spindles also come in various sizes. Bottom whorl spindles are usually less bouncy, spin longer and are better for plying yarns on than top whorl spindles. Of course, you can find rabid advocates of either top whorl or bottom whorl spindles. Just like you can find rabid advocates of various types of spinning wheels, fibers, yarns or knitting needles. Your best bet is to take a few spindles for a test drive and decide what you like. Supported spindles are less common in both availability and usage. They can have hooks or not. They can have bead whorls or flat whorls. One end of a supported spindle sits in a bowl or some other shallow container and the working end hangs free. The spindle is spun and twist builds up in the yarn. The spindle is stopped and the twist is drafted out into the fiber. These types of spindles are best for spinning very short fibers like cotton or dryer lint or for spinning very fine. A Takhli is a type of supported spindle.
(Pictured whorl spindles: Schacht, Anne Grout, turkish)
So, you've decided what type of spindle you want and you're looking at spindle ads. They talk about featherweight, or .05 oz, or maxi, or boat anchor. What are they talking about? It's all about weight. The weight of your spindle to a great extent dictates how thick, or heavy, your yarn can be. You can't spin lace weight yarn on a really heavy spindle and you can't spin bulky yarn on a featherweight. Starting out, unless you're really sure you are planning on always making lace or super bulky hats or only plying commercial yarns, you should choose a spindle that weighs somewhere between 1.75 and 2.5 oz. Okay, you decided on the type. You've decided on the weight. What about woods? There are hundreds of woods that spindles are made of and a million combinations between whorl and shaft. (I won't even mention the ceramic or poly clay varieties of spindles.) My advice is check the weight, check the price -- make sure both are in your range. Then let the spindle speak to you. Which one is gorgeous? 
Which one would you cry over if it goes home with someone else?
(This is a nice reference to spindle weights: typically, the heavier the spindle, the fatter the yarn it will spin.)
Now where to buy a spindle? If you have the opportunity, go to a wool or fiber festival where you can get your fingers on several spindles and try them out. Or find a local spinning guild and do the same thing. A spinning guild has the advantage of usually having a "spindle person" who can give you pointers on how to use your spindle. You can also shop the web. You can find spindle reviews, spindler mail lists and spindle vendors. You can even find directions for making very inexpensive spindles out of old CDs and dowel rods (http://www.interweave.com/spin/resources.asp). Here are some places to visit on the web to feed your spindle frenzy: an online spindle spinning magazine, and four (of the many) spindle vendors on the web that are great at walking you through your first spindle purchase -- Bosworth Spindles and Fibers if you want to buy from the spindle artist, and The Bellwether and Carolina Homespun if you want a vendor that sells work from a variety of spindle artists. You've bought your spindle and starter fiber and they are staring you in the face. Where to learn how to spindle? A few great books are Spindle Spinning from Novice to Expert by Connie Delaney, Spin It by Lee Raven and High Whorling by Patricia Gibson-Roberts (out of print but available at many spinning retailers and libraries). On the web, try www.icanspin.com or the Spin-Off magazine website. Or find one of those "spindle people" at a spinning guild or spinning shop. Pick up a spindle. You'll be surprised how fun and relaxing it really is. You might even start looking at your knitting as sucking away your spindle time. Practice as little as 15 minutes every day. You'll be a spinning maniac in a week. Rebecca supports her fiber habit as a children's librarian in Dearborn, MI. A long-time spindler, she finally broke down and bought a wheel about 4 years ago and finally learned to knit about 6 years ago. Her husband demanded she do something with the yarn she was making. She packs a spindle almost everywhere she goes and can be found addicting, um, instructing spindlers with the Spinners' Flock in Chelsea, MI. text © 2006 Rebecca Hermen, photos © 2006 Jillian Moreno. Contact Rebecca
<urn:uuid:4b08e0c8-97d5-4490-a011-2fad79db8c14>
CC-MAIN-2016-26
http://knitty.com/ISSUEspring06/FEATKSgotspin.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.898817
1,568
2.546875
3
"The Whole Nine Yards" Of What? Where does the phrase "the whole nine yards" come from? In 1982, William Safire called that "one of the great etymological mysteries of our time." He thought the phrase originally referred to the capacity of a cement truck in cubic yards. But there are plenty of other theories. Some people say it dates back to when square-riggers had three masts, each with three yards supporting the sails, so the whole nine yards meant the sails were fully set. Another popular story holds that it refers to the length of an ammunition belt on World War II fighters — when a pilot had exhausted his ammunition, he said he had shot off the whole nine yards. Or it was the amount of cloth in the queen's bridal train, or in the Shroud of Turin. Or it had to do with a fourth-down play in football. Or it came from a joke about a prodigiously well-endowed Scotsman who gets his kilt caught in a door. The Internet is full of just-so stories like these. They're often shaky in their facts about ammunition belts or cement trucks, but they come with assurances that the information came firsthand from an old Naval gunnery instructor or a Scottish tailor. It used to be hard to debunk these tales, since the only way to track the expressions down was by rooting around in library stacks and newspaper morgues in search of a revealing early citation. But with the vast historical collections of books and newspapers that are now online, etymology has joined the list of activities you can do in your pajamas. Word-sleuths traced the modern use of "the whole nine yards" as far back as a 1956 article in a magazine called Kentucky Happy Hunting Ground. Now they've discovered an even earlier version of the phrase, "the whole six yards," which was used in the rural South as early as 1912. That's still how the phrase goes in parts of the South, but it was inflated to "nine yards" when it caught on elsewhere, the same way the early 20th-century "cloud seven" was upgraded to our "cloud nine." The unearthing of those early sources was deemed important enough to warrant a story in The New York Times, not an organ that ordinarily treats etymological discoveries as breaking news. True, the findings don't actually settle what if anything the phrase originally referred to. But they put the kibosh on the stories about World War II and the one about cement trucks, which hadn't been invented yet — though, actually, none of these stories was very plausible in the first place. ` Of course there could be a real story behind the expression, even if it's no more than a family joke about the long scarves that Aunt Florence used to knit as Christmas presents. But it could also be that somebody just plucked the words out of the air one Tuesday morning. One way or the other, the real birth of the expression was when somebody passed it along without caring what "nine yards" referred to. The fact is that once you've said "the whole" it doesn't matter what words you finish it with or whether they mean anything or not — shooting match, enchilada, schmear, shebang? "The whole ball of wax" first showed up in the 1880s, though some writers say it comes from a 16th-century ritual for dividing up an estate among heirs. If you believe that, I've got a caboodle I want to sell you. A number of years ago I started saying "the whole kazonga," just because I liked the sound of it. 
Nobody ever called me on it, but when I finally looked it up it turned out to be the name both of an Italian adult comic book and of a Zambian minister who was involved in a fertilizer scam. In the somewhat unlikely event that "the whole kazonga" ever catches on, you can be sure someone will explain how it originally comes from one or the other of those. Still, it's hard to accept that it doesn't matter where the expression came from. Whether the measure is six yards or nine, it has a tantalizing specificity. It cries out for an explanation, and there are plenty of them at hand. Is it merely coincidence that six yards is the exact diameter of a pitcher's mound? The amount of cloth in a Varanasi sari? The length of a parachute line? But that profusion of possibilities is the key to the idiom's appeal. If "the whole nine yards" had a definitive completion — if it went on to mention yards of cloth, cement or ammunition — it would never have caught on in the first place. It's like a line of poetry; it resonates without resolving. Except that we don't think of this as poetry. A poet's images can bubble straight up out of the imagination; we don't ask for explanations or backstories. Would it really help to know where Gertrude Stein got "pigeons in the grass, alas" from? "Let me see, that was the day when Miss Stein and I were walking in the Luxembourg Gardens, and I started to sit on the lawn but she said, 'No, Alice' ... " But that's just the kind of story we expect when the phrase originates in the collective imagination. So we rummage around in old ships and cement trucks looking for a secret key, as if there couldn't be any poetry in everyday language that didn't begin its life as prose.
<urn:uuid:1dfa081a-7408-49d4-ac7d-09c86d6e4fc2>
CC-MAIN-2016-26
http://krvs.org/post/whole-nine-yards-what
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.978833
1,147
2.65625
3