peg top pants meaning, peg top pants definition | English Cobuild dictionary
1 n-plural Pants are a piece of underwear which have two holes to put your legs through and elastic around the top to hold them up round your waist or hips.
(BRIT) also a pair of N (=knickers)
I put on my bra and pants.
in AM, usually use underpants
2 n-plural Pants are a piece of clothing that covers the lower part of your body and each leg.
(AM) also a pair of N
He wore brown corduroy pants and a white cotton shirt.
in BRIT, use trousers
3 n-uncount If you say that something is pants, you mean that it is very poor in quality.
INFORMAL The place is pants, yet so popular.
4 If someone bores, charms, or scares the pants off you, for example, they bore, charm, or scare you a lot.
the pants off phrase v PHR (emphasis) You'll bore the pants off your grandchildren...
5 If you fly by the seat of your pants or do something by the seat of your pants, you use your instincts to tell you what to do in a new or difficult situation rather than following a plan or relying on equipment.
by the seat of one's pants phrase V inflects
to wear the pants
North Korea puts rockets on standby
Story highlights
• A North Korean photo shows Kim Jong Un meeting with military officials
• North Korean media: Rockets should be ready to "mercilessly strike" the U.S.
• A Pentagon spokesman urges North Korea to "dial down the temperature"
• "No one wants there to be war on the Korean Peninsula," he says
In a meeting with military leaders early Friday, Kim Jong Un "said he has judged the time has come to settle accounts with the U.S. imperialists in view of the prevailing situation," the state-run KCNA news agency reported.
Analysis: Just what is Kim Jong Un up to?
Later Friday, North Korean state media carried a photo of Kim meeting with military officials. The young leader is seated in the image, leafing through documents with four uniformed officers standing around him.
On the wall behind them, a map entitled "Plan for the strategic forces to target mainland U.S." appears to show straight lines stretching across to the Pacific to points on the continental United States.
South Korea and the United States are "monitoring any movements of North Korea's short, middle and middle-to-long range missiles," South Korean Defense Ministry Spokesman Kim Min-seok said Friday.
Kim's regime has unleashed a torrent of threats in the past few weeks, and U.S. officials have said they're concerned about the recent rhetoric.
"North Korea is not a paper tiger, so it wouldn't be smart to dismiss its provocative behavior as pure bluster," a U.S. official said Wednesday.
But Pentagon spokesman George Little said Thursday that it was important to remain calm and urged North Korea to "dial the temperature down."
"No one wants there to be war on the Korean Peninsula, let me make that very clear," he told CNN's "Erin Burnett Outfront."
North Korea's threat: Five things to know
The mission by the B-2 Spirit bombers, which can carry conventional and nuclear weapons, "demonstrates the United States' ability to conduct long-range, precision strikes quickly and at will," a statement from U.S. Forces Korea said.
The North Korean state news agency described the mission as "an ultimatum that they (the United States) will ignite a nuclear war at any cost on the Korean Peninsula."
The deteriorating relations have put paid to any hopes of reviving multilateral talks over North Korea's nuclear program for the foreseeable future. Indeed, Pyongyang has declared that the subject is no longer up for discussion.
While Kim appears to have spurned the prospect of dialog with U.S. and South Korean officials, he met with Dennis Rodman during the U.S. basketball star's bizarre recent visit to North Korea.
North Korea has gone through cycles of "provocative behavior" for decades, Little said Thursday.
Little said Thursday that the United States was keeping a close eye on North Korea's missile capabilities.
Korean nightmare: Experts ponder potential conflict
Allan McLeod Cormack
Allan MacLeod Cormack (February 23, 1924 – May 7, 1998) was a South African-born American physicist who won the 1979 Nobel Prize in Physiology or Medicine (along with Godfrey Hounsfield) for his work on x-ray computed tomography (CT).
From Wikipedia, the free encyclopedia
In rhetoric, anthimeria, traditionally and more properly called antimeria (from the Greek: ἀντί, antí, "against, opposite" and μέρος, méros, "part"), is any novel change in a word's use, most commonly the use of a noun as if it were a verb.[1][2]
Modern Examples
There are a number of examples throughout the English language that demonstrate the evolution of specific words from one lexical category to another. For example, the word 'chill' originated as a noun that could serve as a synonym for 'cold'. Over the years, 'chill' came to be used as a verb ('to chill vegetables') and then, subsequently, as an adjective ('a chilly morning'). Most recently, 'chill' has yet again transformed into another part of speech, an "intransitive verb, meaning roughly 'to relax',"[3] as author Ben Yagoda explains; Yagoda then quotes what he determines to be the starting point of this lexical shift by referencing the lyrics of The Sugarhill Gang's 1979 hit 'Rapper's Delight': "There's...a time to break and a time to chill/ To act civilized or act real ill".
A more unusual case of anthimeria is displayed not through a change in lexical category, but through a change in form altogether. The punctuation mark '/' was originally used to juxtapose two similarly related words or phrases, such as 'friend/roommate', meaning that the person referred to is both a friend and roommate of the speaker. However, younger generations have come to morph the symbol '/' into the written and spoken word 'slash'. Anne Curzan, a professor of English at the University of Michigan, notes that the "emergence of a new conjunction/conjunctive adverb (let alone one stemming from a punctuation mark) is like a rare-bird sighting in the world of linguistics: an innovation in the slang of young people embedding itself as a function word in the language".[4]
The form change from symbol to word also brought about a change in usage, as the situational context of '/' was completely modified to conform to the needs of this new word. The usage of 'slash', according to Curzan, has multiple contextual uses, including the "distinguishing between (a) the activity that the speaker or writer was intending to do or should have been doing, and (b) the activity that the speaker or writer actually did or anticipated they would do...".[4] Curzan also finds that 'slash' has been used to "link a second related thought or clause to the first" as well as simply "introduc[ing] an afterthought that is also a topic shift".[4] Dispersed throughout her blog post entitled "Slash: Not Just a Punctuation Mark Anymore", Curzan has compiled a list of the numerous cases in which 'slash' can be employed, a set of data that she obtained through contributions from students in her undergraduate history of English course. A few examples include:
• "I went to class slash caught up on Game of Thrones."
• "Does anyone care if my cousin comes and visit slash stays with us Friday night?"
• "Has anyone seen my moccasins anywhere? Slash were they given to someone to wear home ever?"
Temporary vs. Permanent Usage
When classifying anthimeria, it is important to distinguish between words that are merely popular for the time being and words that have become permanent fixtures in the English language. As noted above, the use of 'chill' has become a common occurrence in standard English, and the still-transitioning use of 'slash' seems to be well on its way to becoming a permanent conjunction. While still in the transformation stage, though, most newly created words turn out to be fads, serving a purpose only while the trend runs its course. Helen Sword, a professor at the University of Auckland, provides an example in the verb Eastwood, a craze that swept the nation following Clint Eastwood's speech at the 2012 Republican National Convention. "Within weeks, the fad for Eastwooding - talking to an empty chair - had already petered out".[5] The distinction between temporary and permanent "verbifications" is nonetheless useful in tracking the evolution of the English language.
References
1. ^ Corbett, Edward P. J. Classical Rhetoric for the Modern Student. Oxford University Press, New York, 1971.
2. ^ Heinrichs, Jay (6 August 2013). Thank You For Arguing, Revised and Updated Edition: What Aristotle, Lincoln, And Homer Simpson Can Teach Us About the Art of Persuasion. Crown Publishing Group. p. 281. ISBN 978-0-385-34778-5.
3. ^ Yagoda, Ben. "Language: The moving parts of speech". The New York Times. Retrieved 23 October 2013.
4. ^ a b c Curzan, Anne. "Slash: Not Just a Punctuation Mark Anymore". The Chronicle of Higher Education. Retrieved 23 October 2013.
5. ^ Sword, Helen (October 27, 2012). "Mutant Verbs". The New York Times. Retrieved 23 October 2013. | <urn:uuid:54235abf-a4bc-440b-b056-e1e7e53f1733> | 3 | 2.84375 | 0.030225 | en | 0.936563 | http://en.wikipedia.org/wiki/Anthimeria |
Cello Concerto No. 1 (Shostakovich)
The Cello Concerto No. 1 in E-flat major, Opus 107, was composed in 1959 by Dmitri Shostakovich. Shostakovich wrote the work for his friend Mstislav Rostropovich, who committed it to memory in four days and gave the premiere on October 4, 1959, with Yevgeny Mravinsky conducting the Leningrad Philharmonic Orchestra in the Large Hall of the Leningrad Conservatory. The first recording was made in two days following the premiere by Rostropovich and the Moscow Philharmonic, under the baton of Aleksandr Gauk.[1]
Scoring and structure
The concerto is scored for solo cello, two flutes (2nd doubling piccolo), two oboes, two clarinets (each doubling B-flat and A), two bassoons (2nd doubling contrabassoon), one horn, timpani, celesta, and strings.
The work has four movements in two sections, with movements two through four played without a pause:
1. Allegretto
2. Moderato
3. Cadenza – Attacca
4. Allegro con moto
A typical performance runs approximately 28 minutes in length.
The first concerto is widely considered to be one of the most difficult concerted works for cello, along with the Sinfonia Concertante of Sergei Prokofiev, with which it shares certain features (such as the prominent role of isolated timpani strokes). Shostakovich said that "an impulse" for the piece was provided by his admiration for that earlier work.[2]
The first movement begins with its four-note main theme derived from the composer's DSCH motif, although the intervals, rhythm and shape of the motto are continually distorted and re-shaped throughout the movement. It is also related to a theme from the composer's score for the 1948 film The Young Guard, which illustrates a group of Soviet soldiers being marched to their deaths at the hands of the Nazis. The theme reappears in Shostakovich's String Quartet No. 8 (1960). It is set beside an even simpler theme in the woodwind, which reappears throughout the work:
The opening bars of the first movement in piano and cello reduction, showing the initial themes of the cello and woodwind.
The woodwind theme taking on aspects of the DSCH theme itself just before the introduction of the second subject:
The DSCH motive recurs throughout the concerto (except in the second movement), giving this concerto a cyclic structure.
One further theme (at bar 96), originating in folk lullabies, is also found in the lullaby sung by Death to a sick child in Mussorgsky's Songs and Dances of Death.
The second, third and fourth movements are played continuously. The second movement is initially elegiac in tone. The string section begins with a quiet theme that is never played by the solo cello. The horn answers, and the solo cello begins a new theme, which the orchestra then takes up before the first theme returns. The cello plays its second theme, which becomes progressively more agitated, building to a climax in bar 148. This is immediately followed by the first theme played loudly. The solo cello then plays its first melody in artificial harmonics, answered by the celesta, which leads into the cadenza. The second movement is the only movement with no reference to the DSCH motive.
The cadenza stands as a movement in itself. It begins by developing material from the cello's second theme of the second movement, twice broken by a series of slow pizzicato chords. After the second repetition of these chords, the cello's first theme of the second movement is played in an altered form. After the third repetition, a continual accelerando passes through allegretto and allegro sections to a piu mosso section; the first DSCH motive recurs frequently in these sections. The piu mosso section features fast ascending and descending scales.
The final movement begins with an ascent to a high D. The oboe begins the main theme, which is based on the chromatic scale. The cello repeats it, and presents a new theme. The cellos of the orchestra repeat this, accompanied by the solo cello playing fast sixteenth notes. At bar 105, a distorted version of Suliko, a song favoured by Stalin and used by Shostakovich in Rayok, his satire on the Soviet system, is played. Then, the flutes play the first theme again. A new theme played in triple time is presented by the orchestra, which is repeated by the cello. Then, the orchestra repeats and alters the theme. The horn, bass instruments and solo cello follow. The bass instruments play a modified version of the theme, which is repeated by the solo cello after. The cello begins playing a new theme that uses exactly the same notes as the DSCH motif. The modified version that was just played by bass instruments is repeated by the solo cello, accompanied by oboes playing fragments of the new DSCH theme. The first theme of this movement is played by the string section after, followed by the new DSCH theme in the woodwinds. The DSCH theme of the first movement is played, answered by the cello. After the third time this is played, the horn plays the theme again in longer notes. Then, the cello plays a passage from the first movement, which is followed by the first theme of this movement played by the woodwinds. This is followed by the first theme of the first movement played by the cellos of the orchestra, accompanied by scales in the solo cello. Then, a modified form of the first theme of this movement is played in the cello. The concerto ends with seven timpani strokes.
Recordings of this work have been made by cellists including Rostropovich, Heinrich Schiff, and Natalia Gutman.
1. ^ “Rostropovich Plays Shostakovich.” Liner notes. Supraphon, 2013. CD.
2. ^ Shostakovich was quoted in Sovetskaya Kultura, June 6, 1959
3. ^ "‘Blistering’ is the single word which remained rattling around in my skull after an initial run though of this disc." -Dominy Clements, MusicWeb International
4. ^ Schiff was awarded the Grand Prix du Disque for his recording (Heinrich Schiff website).
5. ^ Gutman's 1976 recording with Kondrashin is superior to her 1988 recording with Temirkanov according to a review in DSCH Journal by Louis Blois.
6. ^ BBC Radio 3's Building a Library: Shostakovich feature (circa 2005-2007) ranked this recording as the first choice for Shostakovich's Cello Concerto No.1
7. ^ "cassette-quality sound" according to Louis Blois writing in DSCH Journal
For the Chinese custom, see foot binding.
Footwraps used by the Finnish Army until the 1990s
Footwraps (also referred to as foot cloths, rags, bandages or bindings, or by their Russian name portyanki) are rectangular pieces of cloth that are worn wrapped around the feet to avoid chafing, absorb sweat and improve the foothold. Footwraps were worn with boots before socks became widely available, and remained in use by armies in Eastern Europe up until the beginning of the 21st century.
Putting on footwraps
Footwraps are typically square, rectangular or, less often, triangular.[1] They measure about 40 centimetres (16 in) on each side if square, or about 75 centimetres (30 in) on each side if triangular. Thinner cloth may be folded to produce a square, rectangular or triangular shape. Russian army footwraps were made of flannel for use in winter and of cotton for use in summer.[2]
Apart from being cheaper and simpler to make or improvise, footwraps are also quicker to dry than socks and are more resistant to wear and tear: any holes can be compensated for by re-wrapping the cloth in a different position. Their principal drawback is that any folds in the wraps, which easily occur during marching unless the wraps are very carefully put on, can quickly cause blisters or wounds. Consequently, armies issued detailed instructions on how to put on footwraps correctly.
Footwraps are notorious for the foul smell they develop when worn under military conditions, where soldiers are often unable to change or dry the cloths for days. Russian veterans jokingly prided themselves on the stench of their footwear, referring to their footwraps as "chemical weapons" that would defeat any enemy unaccustomed to the smell.[3]
Military use
Soviet soldier drying his footwraps
Footwraps were issued by armies and worn by soldiers throughout history, often long after civilians had replaced them with socks. Prior to the 20th century, socks or stockings were often luxury items affordable only for officers, while the rank and file had to use wraps.
Prussian soldiers wore Fußlappen, footwraps. An 1869 "Manual of Military Hygiene" advised: "Footwraps are appropriate in summer, but they must have no seams and be very carefully put on; clean and soft socks are better."[4] An 1867 German dictionary of proverbs records the following saying: "One's own footwrap is better than someone else's boot."[5]
The German Wehrmacht used footwraps until the end of World War II. They continued to be worn in the East German National People's Army until 1968.
Eastern Europe
The Russian and later Soviet armed forces issued footwraps from the time Peter the Great imported the custom from the Dutch Army in the 1690s.[6] Footwraps remained standard issue in many Warsaw Pact armies. The Belarusian, Ukrainian and Georgian armies eventually abandoned them in favor of socks in the 2000s.[2][7] In each case, nostalgia about the traditional footwear ran high among soldiers. The Ukrainian army held a special farewell ceremony for its footwraps, with soldiers reciting poems and fables about them.[3]
In the Russian army, footwraps remained in use for tasks requiring the wear of heavy boots until 2013, because they were considered to offer a better fit with standard-issue boots. Because of their association with the Russian army, footwraps are called chaussettes russes (Russian stockings) in French.
1. ^ The triangular version is recommended as a sock substitute by Volz, Heinz (2008). Überleben in Natur und Umwelt. Walhalla & Praetoria Verlag. p. 217.
2. ^ a b Armstrong, Jane (Dec 26, 2007). "Russian military adopts a modern touch: socks". Globe and Mail. Retrieved 9 September 2010.
3. ^ a b O'Flynn, Kevin (December 19, 2007). "Goodbye to the Footcloth, Hello to the Sock". Moscow Times. Retrieved 11 September 2010.
4. ^ Kirchner, Carl (1869). Lehrbuch der Militär-Hygiene. Enke. p. 329. Fusslappen sind im Sommer zweckmässig, doch müssen sie ohne Nähte sein und sehr sorgfältig angelegt werden; reingehaltene, weiche Socken sind besser
5. ^ Wander, Karl Friedrich Wilhelm, ed. (1867). Deutsches Sprichwörter-Lexikon: Ein Hausschatz für das deutsche Volk. F. A. Brockhaus. p. 1307. Ein eigener Fusslappen ist besser als ein fremder Stiefel
6. ^
7. ^ Liss, Artyom (19 February 2007). "Armies boot out Soviet tradition". BBC News. Retrieved 9 September 2010.
Octroi (French pronunciation: [ɔktʁwa]; Old French: octroyer, to grant, authorize; Lat. auctor) is a local tax collected on various articles brought into a district for consumption.
Octroi taxes have a respectable antiquity, being known in Roman times as vectigalia. These vectigalia were either the portorium, a tax on the entry from or departure to the provinces (those cities which were allowed to levy the portorium shared the profits with the public treasury); the ansarium or foricarium, a duty levied at the entrance to towns; or the edulia, sales imposts levied in markets. Vectigalia were levied on wine and certain articles of food, but it was seldom that the cities were allowed to use the whole of the profits of the taxes. Vectigalia were introduced into Gaul by the Romans, and remained after the invasion by the Franks, under the name of tonlieux and coutumes. They were usually levied by the owners of seigniories.[1]
Middle Ages
During the 12th and 13th centuries, when the towns succeeded in asserting their independence, they at the same time obtained the recognition of their right to establish local taxation, and to have control of it. The royal power, however, gradually asserted itself, and it became the rule that permission to levy local taxes should be obtained from the king. From the 14th century onwards, there are numerous charters granting (octroyer) to French towns the right to tax themselves. The taxes did not remain strictly municipal, for an ordinance of Cardinal Mazarin (in 1647) ordered the proceeds of the octroi to be paid into the public treasury, and at other times the government claimed a certain percentage of the product, but this practice was finally abandoned in 1852.[1]
Tax farming
From an early time, octroi collection was farmed out to associations or private individuals, and so great were the abuses which arose from the system that the octroi was abolished during the French Revolution. But such a drastic measure meant the stoppage of all municipal activities, and in 1798 Paris was allowed to re-establish its octroi. Other cities were allowed gradually to follow suit, and in 1809 a law was passed laying down the basis on which octrois might be established. Other laws were passed from time to time in France dealing with the octroi, in 1816, 1842, 1867, 1871, 1884, and 1897. By the law of 1809 octroi duties were allowed on beverages and liquids, food, fuel, forage, and building materials. A scale of rates was fixed, graduated according to the population, and farming out was strictly regulated. Under the law of 1816, an octroi could only be established at the wish of a municipal council, and only articles destined for local consumption could be taxed. The law of 1852 ended the payment of 10% of the gross receipts to the national treasury. Certain indispensable commodities were allowed to enter free, such as grain, flour, fruit, vegetables, and fish.[1]
French octroi duties were collected by several procedures.[1]
1. The regie simple, i.e. by special officers under the direction of the mayor. By the first decade of the 20th century more than half the octrois were collected this way, and this proportion tended to increase.
2. The bail à ferme, i.e. farming. The "tax farmer" was authorized to collect the octroi, and in return contracted to pay the municipality a yearly amount, based on the estimated revenues. Use of this method steadily decreased.
3. The regie interesse, a variation of the preceding method. The contractor paid the municipality a fixed annual sum, representing the municipality's share of estimated revenue, plus a share of revenues in excess of the estimate. This method had been practically abandoned by the first decade in the 20th century.
4. The abonnement avec la regie des contributions indirectes, under which a department of the treasury undertook to collect the duties. Use of this method was increasing by the first decade in the 20th century.
Gross octroi receipts in 1901 amounted to 11,132,870 francs. A law of 1897 created new sources of taxation, giving communes the option of:[1]
1. New duties on alcohol.
2. A municipal license duty on retailers of beverages.
3. A special tax on wine in bottle.
4. Direct taxes on horses and carriages, clubs, billiard tables, and dogs.
5. Additional centimes to direct taxes.
From time to time there was agitation in France for the abolition of octroi duties, but it was never pushed very earnestly. In 1869, a commission considered the matter, and reported in favour of their retention. Octrois were finally abolished in 1948.[1]
In Belgium, on the other hand, octrois were abolished in 1860, being replaced by an increase in customs and excise duties; and in 1903 those in Egypt were also abolished.[1]
A similar tax, called the Alcabala, was collected in Spain and the Spanish colonies. This tax was in force in Mexico until a few years before the Mexican Revolution of 1910.[2] In 1910, octroi duties still existed in Italy, Spain, Portugal, and some towns in Austria.[1]
Octroi was still in use in the 1990s by local authorities in Pakistan for domestic goods movements. Although abolished for general trade in 1997, octroi was still being charged on certain commodities such as electricity as late as 2006. As of 2013, octroi is levied in Ethiopia and in the Indian state of Maharashtra.[3]
The octroi was discontinued in Mumbai and other municipal corporations in 2013 and replaced with a Local Body Tax (LBT).[4]
1. ^ a b c d e f g h Chisholm 1911, p. 994.
2. ^ Rines 1920.
3. ^ As an example, details on octroi in Maharashtra can be found at www.punecorporation.org.[citation needed]
4. ^ Business Standard staff 2013.
• This article incorporates text from a publication now in the public domain: Ingram, Thomas Allan (1911). "Octroi". In Chisholm, Hugh. Encyclopædia Britannica 19 (11th ed.). Cambridge University Press. p. 994. Endnotes:
• A. Guignard, De la suppression des octrois (Paris);
• Saint Julien and Bienaim, Histoire des droits d'octroi à Paris;
• M. Tardit and A. Ripert, Traite des octrois municipaux (Paris, 1904);
• L. Hourcade, Manuel encyclopedique des contributions indirectes et des octrois (Paris, 1905);
• Report on the French Octroi System, by Consul-general Hearn (British Diplomatic and Consular Reports, 1906);
• Abolition des octrois communaux en Belgique: documents et discussions parlementaires (a Belgian official report)
• This article incorporates text from a publication now in the public domain: Rines, George Edwin, ed. (1920). "Octroi". Encyclopedia Americana.
1922 Encyclopædia Britannica/Lehmann, Liza
From Wikisource
LEHMANN, LIZA (1862-1918), English singer and composer, was born in London July 11 1862, the daughter of the artist Rudolf Lehmann. She studied singing under Alberto Randegger and Hamish MacCunn, making her début in 1885, and became extremely popular as a concert singer. In 1894 she married Herbert Bedford, the composer, and retired from the concert platform, devoting herself henceforward chiefly to composition. Her most popular works are the song cycles In a Persian Garden (1896, words from the Rubaiyat of Omar Khayyam) and The Daisy Chain (1900), and various Shakespearean songs, while she also produced a light opera, The Vicar of Wakefield (1907); the music for the farce Sergeant Brue (1904) and the morality play Everyman (1915). Madame Lehmann became well known as a teacher of singing. She died at Hatch End, Pinner, Sept. 19 1918.
George H. Earle, Jr., affirmed.
Examined by Senator Dunlap:
Q. You are connected with the Guarantee Trust Company.
A. President of the Pennsylvania Warehousing Company, vice president of the Guarantee, and a director in the Equitable Trust Company.
Q. You know the purpose of this investigation.
A. Yes; I think the failures were due to the fact that the private banks depended on private credit. Wherever you have a money scare they will be hurt the most. When a stringency came the private bankers felt it; the great institutions did not feel it so much; feeling it that way, they had no one to help them. The banks and trust companies were ready to see each other out, and for that reason you had a larger number of failures among private banks. On the question of state supervision, people who are connected with national banks have had things to say about trust companies. The laws under which trust companies were organized were wise legislation. I never heard of the failure of a trust company; the trust companies have stood up through all times; there is good reason for it. You take national banks which discount paper in time of great stringency, they have two names to look to; the trust company has security. Every trust company only permits loans on listed security, so that the officer has no chance to make a mistake. If they want a loan from a trust company other than on a listed security it has to go before the board.
Q. Did you hear Colonel Bosbyshell.
A. No, sir; you should bear in mind there never has been a failure of a trust company. This committee is not considering the interest of the banks or the trust companies or the institutions. It is considering the general good of the people. You must consider how the people are to get the use of money most readily. In making a law requiring a reserve you consider the prudence. You should therefore consider what is the least prudent reserve. My view of the reserve is not that it is to be kept but that it is to be used. I do not think that twenty-five per cent. would be of any use. In Philadelphia there was more than enough money to make things easy here. In the vaults was enough money to supply all wants. If you take a trust company that only loans on collateral and make it carry a large reserve you put a restriction on companies that never had a failure.
Q. Do you apply that to state banks.
A. They do a different business. They loan money. If a man wants an active business account he wants to keep it in a bank. The bank have active accounts. In times of stringency the banks must use their money. I do not believe that any trust company in Philadelphia ran down ten per cent. in any one month of the time, but the banks have active accounts and discount paper which is running on time. In the companies I am in I think ninety per cent of the moneys are loaned on collateral and on call, and while not absolute it is pretty near reserve. Take Reading fours, you could sell 1,000,000 a day, and Pennsylvania railroad, and when you get to a stock like Reading, you could sell $10,000 worth of that without a ripple. I think it would be wise to require trust companies to keep a ten per cent. reserve. In the working of the national bank act, the bank examiners were careful not to call for a statement during the panic.
If a bank is solid I do not think there ought to be any want of liberality in using their reserve. On the subject of private banks I think every private banker ought to be compelled to do his business in his own name ; no fancy name should be allowed. If a man wants to put his money in the hands of an individual I think he should be permitted to do so. A private banker should be compelled to do business in his own name and state that he is a banker. All business men are private bankers they are all quasi-trustees. If you go in and interfere with the private banks you should go into other private business. I want very strongly to say that as to trust companies they receive a thorough examination from the courts, the system seems to have worked well. I have a strong tendency to letting well-enough alone.
Q. You are not in favor of an additional examination.
A. I do not think it is necessary. During the last panic there was a feeling against every one. If we could have called upon some one to examine the trust companies it would possibly have restored confidence. I do not agree with the gentlemen who say that the state ought to pay these gentlemen. I think if a business is conducted for my benefit I should pay for it. If it is compulsory they could not help themselves. I do not see that the examiner could be possibly bought. So I would make the state officials entitled to ride on the railroad cars so it would not seem a favor to them.
Q. How should the examiner be appointed.
A. I would be opposed to having any rival order of institutions appoint an examiner for my institutions. I think he should be appointed by the officers of the state.
Examined by Mr. Flad :
Q. You assume that there shall be a published statement of the results.
A. That I had not thought about. In New York they have had a marked experience in that. In New York some of the banks were liberal, they said if our statements are published and our depositors see we are running on a weak reserve it might cause a lack of confidence : my own judgment is that the public statements should be only made by the head of the department. It might be unwise to publish to the community that on a given day they were under the reserve. If you publish five millions of deposits and only five hundred thousand dollars worth of reserve, I think the bank might be good, yet it would hurt the bank, I don't think public statements are of much use.
Q. This morning the superintendent of the mint gave us some information as to the bond and investment companies that are being formed. Do you know of this.
A. There are some of them that are good and some of them are going to rob the people. They go to good companies and open accounts and then they advertise so and so deposit agency, and do tricks of that kind. One of the companies I am connected with investigated it and came to the conclusion that it was a certain losing scheme. Some of them are real charities and unquestionably useful. I think the state should interfere and bring them under the rules and laws of the state. I should think that the trust companies had plenty of safeguards against everything except fraud and that we cannot prevent, if you will put in the law the right of the trust companies at any time to ask for an examination. I think there should be a supervision of state banks as of national banks and that trust companies should have the right to ask for an examination if they desire. I would not have them published.
Q. Then you would make such publication at the discretion of the head of the department.
A. Yes, sir ; in ordinary times they would like to publish the statements. If you made it compulsory at all times it might make trouble. Take the Fidelity with fourteen million dollars worth of deposit.
In the sentence below, is the comma optional or should it (not) be there? I can hear it there when this is spoken, but I am not convinced it needs to be there in written form.
In order to pass [...] data protection, the customer must correctly answer [...]
As one could simply reorder the elements of the sentence:
The customer must correctly answer [...] in order to pass [...] data protection.
and no comma would be needed.
Hearing it should be your guide. Written language is a symbolic rendering of speech. Punctuation is a symbolic rendering of the flow of speech. – bib Sep 6 '12 at 14:24
The fact that you can reorder or reword a sentence to use a certain punctuation tells you little about the proper punctuation of the original. At that point it's a different sentence. – Jay Sep 6 '12 at 14:45
@WillHunting I agree it is not perfect, [pause] but it is often a pretty good guide. If anything, additional punctuation is often required to help organize longer, more complex thoughts, many of which are more convoluted than our natural speech patterns [like this sentence]. But where there is a natural pause, some punctuation is almost always helpful. – bib Sep 6 '12 at 15:07
2 Answers
In the first sentence, it is good to have a comma but not wrong to omit it. In the second, there should not be a comma.
When you use an "in order to ..." clause at the beginning of a sentence, it is best to place a comma before the main clause.
This is a very short answer. It could be improved by adding supporting facts or references. Please see the faq. – MετάEd Sep 8 '12 at 19:29
Sunday, February 5, 2012
Assault of Thoughts - 2/5/2012
- David Henderson on five myths about free markets. I haven't listened to this yet, but past talks by David along similar lines have been very good. I think some people are under the mistaken impression that these myths are held by left-of-center economists. I think this is largely untrue and really a straw man I have to deal with a lot. But there are large segments of the population that think this sort of thing, and Henderson's clear exposition is great for meeting that.
- A great article in the New York Times by Christina Romer explaining why manufacturing isn't special and shouldn't get special treatment. Yes, yes - be nostalgic if you want. But don't use that as an excuse to be reactionary, regressive, or protectionist. The "decline of manufacturing" is actually not a decline in manufacturing per se, but a decline in manufacturing employment. And that is a very good trend. We can do things to help those who are hurt by the dislocation, but don't mix that up with an argument for standing against progress.
- Arrggh. Arnold Kling on 1946. It's like these people don't even realize that a counter-argument exists. It's like they don't even realize that most Keynesians in the early to mid forties were not saying this. Samuelson bent over backwards in his chapter in the Harris volume to point out that he was swimming against the tide, that he was making recommendations that ran counter to those of a lot of his fellow Keynesians. Dealing with the 1946 issue feels like dealing with 1920-1921 all over again. I'm not saying it's a fantastic vindication of Keynesianism and an assault on your theory or anything. I'm simply saying that theories like Keynesianism don't last as long as they do by being so vulnerable to such a simplistic chain of logic. Arnold, if you think this is a decisive case of Keynesians getting it wrong, you're working off of a strawman version of Keynesianism. Here's the end of my public service announcement: Keynesianism does not, and never has claimed, that government is the only source of demand or that the economy can't recover without government spending or that government cut backs will doom the economy in all circumstances. In some cases, government cut backs can be quite beneficial for the economy.
- LK has a discussion of Hayek's prediction of the Great Depression. He raises serious doubts about the veracity of the claim.
1. What book is that Samuelson discussion in? I have not really been that interested in this part of the reconversion debate (because its pointless) but I guess since its cropping up now I might as well collect citations.
1. He probably says it elsewhere too, but what's often quoted is his chapter in Seymour Harris's "Postwar Economic Problems" (1943). What I like about it is that Samuelson actually comes out and says he's more pessimistic than other analysts.
Hansen has a chapter in that one too that is more upbeat.
I was curious if you had thoughts on this - let me know if you do. I haven't read all that extensively, but I've read enough to know that (1.) Samuelson wasn't representative and he was aware of this, and (2.) nothing in Keynesian thinking - even at this early date - necessitated Samuelson's fears. This isn't to say he didn't use Keynesian arguments - he did. But there were excellent reasons to suspect the postwar period would not see a resumption of the depression, and indeed many Keynesians did not expect such a thing.
2. Could you, as a center-left Keynesian, give an example of a situation where you would personally advocate big budget cuts? Let's say they would have to be drastic enough that defense cuts (or other cuts that liberals approve of) wouldn't be enough. And just to drive the point home with a power tool, tax increases on the rich aren't enough either.
The reason I'm asking this: you keep telling us that Keynesians (particularly Krugman) really aren't the vicious anti-market spendthrifts that libertarians seem to think they are, but I honestly haven't seen Krugman make a pro-market argument in years. Liberal commentators and bloggers have made the word "austerity" almost synonymous with "rape the poor", and Krugman is only fanning the flames, so I'd be very surprised if he ever advocated major budget cuts.
3. Daniel Kuehn, whenever you get around to that Keynes and the post-war Keynesians article, will you cite David Colander's article on Keynes and Lerner regarding Keynes's opposition to deficit finance?
4. Daniel, in light of Russ Roberts' post at, you might want to cite some evidence that most Keynesians were not predicting another depression.
1. I have on here before, and I think I'm going to start collecting them all together soon.
A good place to start is for people to read the entire Samuelson chapter. He comes out and says he knows his view is unpopular.
What is a paphead, or paph-head
michigoose (Z5OH), January 1, 2013
Paph-head or paphhead or any variation thereof is an orchid grower who is very fond of paphiopedilums (paphs), or lady slipper orchids. Orrin bears the distinction of introducing this term to the Garden-web Orchid forum.
Topic: 2 PWM signals out of phase (Read 3895 times)
Dear Forum,
I am trying to mimic an optical rotary encoder with two digital outputs from an Arduino Uno. The mechanical encoder has two outputs (ChA and ChB) which produce 5V pulses (50% duty cycle) at a frequency related to the speed of the encoder. I haven't measured the output frequency accurately but the two channels can operate at over 1KHz, probably much higher. The direction of the encoder appears to be coded by which channel (A or B) goes high first. So, to mimic this behaviour, I need to stagger the two digital outputs by roughly 500 us or even less.
I can get this working to some degree by simply setting a pair of output pins on an arduino high for a period and then low with a delay to stagger the two outputs. This is fine for relatively slow speeds but to get the motor running at a fast speed, I need to send pulses faster than the arduino can manage with millisecond delays. If I reduce the stagger delay between the pulses too low, then the motor stops turning smoothly; presumably this is because of interrupts interrupting the output.
I have tried using PWM outputs instead since I believe these will remain at a given frequency regardless of interrupts. I have managed to change the frequency of the output to get it running at more than 1KHz. However, to get this to work, I need a delay between the PWM signals. So far I have just tried outputs from the same timer (timer 0; pins 5 and 6); A quick look with an oscilloscope (while at work a couple of days ago) made me think that the two PWM signals were precisely in phase even though there was a delay before starting them.
Is this expected? I guess it could be because the PWM signals are from the same timer. Do you think it is possible to get PWM signals out of phase from the same timer, or from outputs controlled by different timers? I don't have access to an oscilloscope to test it out so I hoped someone out there would be able to tell me if I am flogging a dead horse!
I also wonder if there is another approach to this problem.
Yes it is expected, you guessed right.
No it is not.
However you could get the timer to generate an interrupt. Then the interrupt service routine will toggle the pins in a quadrature fashion. You will need four interrupts for one quadrature cycle.
Alternatively you could always use microsecond delays (delayMicroseconds()).
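To make the interrupt route concrete, here is a rough Arduino sketch of the idea. It is hardware-only code, not verified on a board; it assumes the common third-party TimerOne library is installed, and the pin numbers and the 250 µs quarter-step period are illustrative choices, not values from this thread:

```cpp
#include <TimerOne.h>  // third-party TimerOne library (assumed installed)

const int PIN_A = 2;   // illustrative pin choices
const int PIN_B = 3;

// Gray-code sequence for one quadrature cycle: (A,B) = 00 -> 10 -> 11 -> 01.
// Four interrupts therefore produce one full period on each channel.
volatile uint8_t phase = 0;

void quadratureISR() {
  static const uint8_t a[4] = {0, 1, 1, 0};
  static const uint8_t b[4] = {0, 0, 1, 1};
  digitalWrite(PIN_A, a[phase]);
  digitalWrite(PIN_B, b[phase]);
  phase = (phase + 1) & 3;  // wrap 0..3
}

void setup() {
  pinMode(PIN_A, OUTPUT);
  pinMode(PIN_B, OUTPUT);
  // 250 us per quarter-step -> 1 ms per full cycle -> 1 kHz per channel
  Timer1.initialize(250);
  Timer1.attachInterrupt(quadratureISR);
}

void loop() {
  // Direction could be reversed by stepping backwards through the tables.
}
```

Because the ISR does the toggling, the output keeps its timing even while loop() is busy, and the step period (hence simulated speed) can be changed by re-initializing the timer.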
You can get out-of-phase signals from the same timer if you put it in one of its PWM modes and configure the TCCRxA register for toggling on compare match.
For example, let's take Timer 1 (Section 16.11 of the ATmega328P datasheet). Configure it for CTC mode 12 (WGM1[3:0]=0b1100) so that the ICR1 register defines the TOP timer value, hence the frequency (PWM frequency will be 16 MHz/(ICR1+1)). Then, configure the TCCR1A register so that OC1A and OC1B both toggle on a compare match.
Now set OCR1A=ICR1-1, OCR1B=OCR1A/2. This means that once every timer period (from TCNT1=0 to TCNT1=ICR1) there will be a compare match with OCR1A and it will toggle, and approximately half a period later there will be a compare match with OCR1B and it will toggle.
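As a sketch of that register recipe for the Uno's ATmega328P (OC1A and OC1B come out on pins 9 and 10; the ICR1 value below is an illustrative choice aiming at a 1 kHz output, and none of this has been verified on hardware):

```cpp
void setup() {
  pinMode(9, OUTPUT);   // OC1A
  pinMode(10, OUTPUT);  // OC1B

  // CTC mode 12 (WGM13:0 = 0b1100): ICR1 defines TOP.
  // COM1A0 / COM1B0 set: toggle OC1A / OC1B on compare match.
  TCCR1A = _BV(COM1A0) | _BV(COM1B0);
  TCCR1B = _BV(WGM13) | _BV(WGM12) | _BV(CS10);  // no prescaling

  ICR1  = 7999;       // timer period = 8000 / 16 MHz = 0.5 ms
  OCR1A = ICR1 - 1;   // A toggles once per timer period -> 1 kHz square wave
  OCR1B = OCR1A / 2;  // B toggles ~half a timer period later
}

void loop() {
  // Nothing to do: the timer hardware keeps running on its own.
}
```

Since each toggle happens once per timer period, a full square-wave cycle on each pin spans two timer periods, so the half-timer-period offset between the two compare values works out to roughly a quarter of the output cycle — i.e. quadrature.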
Thank you,
I have a bit of reading to do to understand this but I will give it a go.
As I understand it you are looking for two signals in 'quadrature' (meaning 90 degrees out of phase, probably doesn't need to be exactly 90 degrees for most applications, but it is essential that the two signals not switch simultaneously as that would be 0 or 180 degrees out.)
The standard PWM modes on the timer allow the two pins controlled by that timer to switch with different duty cycles, with the fast mode this is accomplished by switching both pins together as the counter wraps round, then each switches back at its own point in the cycle. This means both switch simultaneously at the start of a pulse. (No good for quadrature)
With 'phase-correct' mode the counter counts up, then down, then up, and each pin switches at a fixed counter value, one way for counting up and switch the other way for counting down. This means the PWM signals overlap symmetrically, making them precisely in-phase.
In fact each output pin can have its sense inverted with configuration bits, so, for instance 180 degrees out of phase IS possible - however its still not quadrature.
So the method RuggedCircuits suggests is a neat trick to get quadrature out of the timer unit - new to me, sounds like the way to go. Study the relevant timer chapters in the datasheet for all the details. timer1 is the most general timer, so I'd start with that one.
[ I won't respond to messages, use the forum please ]
Sorry, but PWM won't work to emulate an encoder. PWM has a fixed frequency, and the on/off time ratio (duty cycle) determines the output average DC value. An encoder, when rotating at a constant speed, produces two 90-degrees-out-of-phase signals of 50% duty cycle.
What you want to do is use 2 digital outputs, and 4 delays of equal value -
Output 1 high,
Output 2 high,
Output 1 low,
Output 2 low.
The delay is what determines the speed: the shorter the delays, the faster the speed; the longer the delays, the slower the speed. The frequency changes with speed, not the duty cycle as in PWM.
Published in Notebooks
DaneElec digital pen
DCC 2007: Israeli technology helps lazy students
Dane-Elec was showing off their Digital Pen at DCC 2007. Honestly, I was a bit skeptical, but they were more than happy to demonstrate it in action, and for a good reason - the sample worked flawlessly.
The device uses 3d acoustic technology developed by Epos, an Israeli startup. Two microphones are used to triangulate the position of the pen. A similar concept can be used to detect the source of loud sounds at a distance, such as artillery or gunfire. I wonder if that's what the Israelis originally had in mind.
Simplicity was one of the design goals and judging from what we saw the device should be very easy to use. Basically you put it on top of a page, push a button and write. Foolproof, unless you're illiterate, but I guess you wouldn't be reading this if you were.
When you're done, just push the button again and you're off to the next page. The images are stored on 1GB of embedded memory which, according to the Dane-Elec guys, is sufficient for "thousands of pages".
The pen base station recharges when connected to a USB port, and the battery should endure as much as 120 hours before it drops dead. When plugged into a laptop the contraption works pretty much like any USB key: you just copy the scanned pages and that's it.
You will get some bundled OCR software as well, and it did a good job at the presentation. The base station is relatively small, maybe half the size of a contemporary candy bar cellphone, but keep in mind that this is an engineering sample and not the final product.
There are a few drawbacks though. You can't actually see what the device captured before getting the stuff on a computer, but if the gadget works properly that shouldn't be a problem. On the other hand, you also need a computer to send it.
If there was a way of connecting it to a smartphone, or if it used a memory card, you would be able to send your notes in no time at all. Now that would be nice for geek journalists. Hopefully, if the concept takes off, this will be taken care of.
Dane-Elec and Intel Capital invested more than a million bucks in this thing and the final product should be available at the end of August, although it was expected a bit earlier. The company plans to mass market it to students and professionals as two separate models priced at 99 euro.
You can also check out a video on YouTube.
Last modified on 03 July 2007
In Terraria, the mud block seems a little useless, I read on the crappy Wikia page that Mud blocks can be used to grow mushrooms in some fashion.
From Mud on Wikia:
Mud is slightly darker than normal Dirt. It will sustain Underground Jungle grass as well as Mushroom Grass, but only at below-zero depths.
The largest single source of Mud in any given world is the Underground Jungle.
Does anyone know how to do this?
Should I plant mud blocks next to some green growth?
Do you know you mentioned the 'Crappy Wikia' page and then linked to the 'Terraria Online' wiki instead? :) – James Jun 11 '11 at 6:22
@James: I'll change it, someone else changed the link, but not the text. Actually I'll just change the link back, altering the question in this manner makes the question less meaningful. Because part of the issue was that I didn't know there was a second wiki. – Mark Rogers Jun 11 '11 at 15:18
possible duplicate of How do I set up a Glowing Mushroom farm in Terraria? – Mr Smooth Jun 7 '12 at 17:38
1 Answer
Mud is required to grow mushroom grass (which gives glowing mushrooms) and jungle grass (which mimics the underground jungle).
As long as your depth (as per a depth meter) is below "sealevel", both types of grass can be planted.
Here's an example mushroom farm:
[screenshot of an example mushroom farm]
For further instruction on how to best take advantage of mushroom farms like the above, see the wiki. (For what it's worth, the wiki you linked to was not the official Terraria wiki)
Hey thanks for the wiki link! – Mark Rogers Jun 11 '11 at 1:18
@Raven Nice screenshot! A silly question maybe, is their an item that allow to jump out of that holes? – Drake Jun 11 '11 at 10:44
@Drake - you can see the wooden platforms, which you can land on and jump again. If they weren't there, a grappling hook would work fine (or maybe the Red Balloon and Air in a Bottle). – Ian Pugsley Jun 11 '11 at 15:29
@kissaki - I'm not sure that would get you more mushrooms/block. More mushrooms total, sure, but I'm not sure there's an efficiency improvement. That said, I think the picture could be improved with taking out the center of each "+". giving a checkerboard-like appearance. – Raven Dreamer Jun 12 '11 at 0:20
This is a more optimized version of the above mushroom farm. img839.imageshack.us/img839/5641/mushroomfarm.png – James Jun 13 '11 at 16:42
I'm trying to locate the birth location and parents of "Thomas Tunin". I have done extensive Google searches but have so far ended up at a deadend on these particular details. Any suggestions on where to go from here?
Tunin, Thomas
b. 1836
d. 1912 Bartlesville, Washington, Oklahoma
Gender: Male
Spouse: Phocbus, Mary Almeda
b. 1838
d. 1897 Little Robe, Oakwood, Dewey, Oklahoma
Gender: Female
Tunin, Lura
Tunin, George D.
b. 1871
d. 1931 Little Robe, Oakwood, Dewey, Oklahoma
Gender: Male
Tunin, Wlbert
Gender: Male
Tunin, Bllomer
Gender: Male
Tunin, Willard Barton
Tunin, Martha Ann
Gender: Female
Source: http://www.okgenweb.org/~okahgp/f_250.htm#16
Thomas Tunin was born in 1839 and died in Bartlesville OK in 1912.
Thomas Tunin and Mary Almeda Phocbus were the parents of four sons: Elbert, Bloomer, Willard, and George and also two daughters; Martha Ann and Lura Almeda. The children were all born in Illinois.
They moved to Kansas in 1878 and then to Indian Territory which became Bartlesville OK in 1885. They moved to Dewey County about 1895 homesteading in Sec. 25-18-15. The son, George and daughter Lura came with them.
They came by covered wagons, bringing a few cows, chickens and horses. They cooked on a camp fire under a cottonwood tree and slept in the two wagons until a dugout could be made. They had many hardships as Mrs. Tunin was sick and in bed at the time. She died in 1897. They lived with the John Lutton family on a part time basis then.
Source: http://archiver.rootsweb.ancestry.com/th/read/OKDEWEY/1998-10/0909541518, cites Dewey County (Oklahoma) Historical Society, "Spanning the River," 3 vols., particularly v. 1 (1976), "Oakwood Area of Dewey County ..."
Thomas Tunin Grave Stone
Source: http://www.okgenweb.org/~okdewey/cem/harrison/thomastunin.html
The above shows death in 1908 though?
Your question could be improved by giving an account of the sources for the information you are presenting. That would help those answering the questions to understand what you have already looked at, and would help them make more informed suggestions. – Gene Golovchinsky Dec 6 '12 at 19:11
Added some links, is this what you where referring to? – Jerry Tunin Dec 6 '12 at 19:15
yes, this is on the right track. Basically, it helps to know not just what you found (although this is important!) but where you looked and what kinds of things you searched for. – Gene Golovchinsky Dec 6 '12 at 21:31
I suggest that now you take a look at the various databases available in FamilySearch (and Ancestry.com if you have access to that) using the names you found on the okgenweb site. The idea is to reduce the likelihood that someone made a mistake in interpreting the often patchy and conflicting data we have from that time. The more truly independent sources (as opposed to copies or re-statements of the same source) you can find, the more certain you can be that the info is correct. – Gene Golovchinsky Dec 6 '12 at 21:34
Thomas was the son of Robert and Hannah Angeline Ratliff Tuning. – Jerry Tunin Dec 7 '12 at 17:58
1 Answer
We've all confronted the circumstance where it is a struggle to confirm something we located that seems unsourced information about an ancestor.
1. In this case, the narrative passage reads less like "unsourced work" and more like treasured family tradition (or, better yet, a summary of what could be more substantial tradition). I would work the material as though it was an artifact--conduct research to learn its provenance. Since the passage was published by the Dewey County (Oklahoma) Historical Society, I'd start there. [See Note 1]
2. The family tree data looks much like but is not identical to the narrative/tradition information. If I understand correctly, the family tree data information came from the "Oklahoma Genealogy Database." According to the introduction to the database, it contains information from "cemetery records, Social Security Death Index, obituaries, NWOGS 'Key Finder,' newspapers, 'Woodward County Pioneer Families Before 1915', 'Woodward County Pioneer Families 1915-1957', 'Our Ellis County Heritage 1885-1974' Vol. I, 'Our Ellis County Heritage 1885-1974' Vol II, & other sources available at Woodward Library, contributions by researchers, and my own [?Donna Dreyer] family research." I would contact the database owner (?Donna Dreyer and the Northwest Oklahoma Genealogical Society) to learn if she/they have a vertical file on the family and/or to inquire if she/they are able to identify the specific sources that contributed to the Tunin family tree data.
3. The Thomas Tunin tombstone photograph is part of a collection, "Little Robe Cemetery [aka Harrison Cemetery]," Oklahoma GenWEb; the webpage reports this material was last edited in 2008. Also on the webpage index, I find tombstone photographs about (a) Almeda Tunin (seems Thomas' wife); (b) George D. Tunin; (c) Charley Tunin; (d) Jennie Tunin; (e) Mary Almeda Tunin; (e) Tunin children. The owner/creator of this Robie Cemetery page seems Susan Bradford, for whom an email address is reported in the 2008 page. Were I in your position, I would contact Susan Bradford to learn more details about the cemetery and/or photograph collection. Also to learn if she can make higher resolution photographs of the stones available to you. [See Note 1] You might separately contact the town to learn the history of the cemetery, extant cemetery records and whether or not a cemetery map exists.
The three points above are all examples of working "from the known to the unknown."
Jerry wrote, "I'm trying to locate the birth location and parents of 'Thomas Tunin'."
From the information provided, Thomas Tunin was born in the 1830s. There are not many birth records available from that period in the US, but as they say, "it's all local." Even if a birth record does not exist, it is quite possible records exist by which you are able to prove a birth date or approximate date and parentage.
1. Use the clues from the family tradition to develop census information about Thomas Tunin. Among other data, census records from 1850 forward provide clues about birth location and age. For example, what I believe to be your Thomas Tunin's entry in the 1900 U.S. census reports he was born July 1836 at Illinois, to a father born New York and a mother born Indiana. You should view the record and confirm this information. All information is subject to error, so you want to develop an array of information from other census (and from other record groups). Take care to check each census place for other "Tunin" entries and make notes about those you find (often times we learn later those are records about parents and/or siblings). Review census entries carefully--each may provide unique clues to other marriages or more distant relations.
2. Determine if a death record is available for Thomas Tunin. According to "Oklahoma Death Record Information Online," the "Vital Statistics section of the [Oklahoma] Department of Health" has death "records dating back to around 1908." Whether or not the state has a record of Thomas' death, check next with the county where he was buried (Dewey) to learn if that jurisdiction maintained death records and has a record.
3. Learn about other record availability. Researching about the census (and from other information) you will gather clues about where Thomas lived (and when). There may be records about him in any/all of those places. These might be land/deed records, probate records, tax records, court records, records about his children ... A good way to learn about the records that are available in each location is to search the FamilySearch Catalog for the "place."
4. Research the family group. Develop information about all of his children and his wife (or wives). This means tracing them in the census, learning about their births, marriages, deaths, obituaries and other information that will provide insight into their lives. You are presumably descended of this Thomas (but maybe not????), so that his life and those of all his children and his spouse(s) influenced your direct ancestors in some way. Learning about the whole family group will help you build a better, more complete picture of this Thomas Tunin.
Good luck in your quest.
NOTE 1: When you post information to the internet, you are "publishing" or "distributing" something. Facts, which include represented names, dates and places, are generally considered outside the bounds of US copyright law. That exclusion does not generally apply to narrated material and other creative works, such as photographs. Your question includes information that is quite likely subject to copyright--the apparent extended quote from Spanning the River and the re-posted tombstone image. You should always try to learn the copyright rules that apply to material you didn't develop yourself. Personally, I use a basic rule of thumb about genealogy and copyright. In a nutshell, without permission, I try to quote less than three sentences or the textual equivalent from material dated post-1923. Also, without permission, I want to link to (rather than repost) images that are not in the public domain.
While you may have permission to post this various information or some other reason to believe it is in the public domain, it was not obvious to me.
Please help us keep Genealogy.SE free of criticism and complaint by revisiting your question and either making adjustments or disclosing permissions.
Thanks, I've located additional info. – Jerry Tunin Dec 7 '12 at 17:58
• Keyboard Shortcuts
from Genius – Contributor Guidelines on Genius
Left/Right arrows
These can be used to move between annotations more quickly. Left takes you to the previous annotation, while right takes you to the next one
Shift + L
Pressing Shift + L will allow you to edit the page you’re on (assuming you are capable of doing so, i.e., you have enough IQ or you’re an editor).
Note: you cannot edit a “locked” text/song if you’re not an editor/moderator of the site
Shift + Enter
This has many uses, but what it does is basically save any edits you've made to text.
Ctrl + Shift + M
For Macs: Control + Shift + M
OG users of RG remember when the combination was Shift + Command + Space, but this will play/pause the player as long as there is audio to be played/paused
(This works as long as the audio player shows up at the bottom of the page.)
Tab + Enter
Tab + Enter lets you send a message or forum post more quickly. Enter your text and press Tab + Enter, so it will be sent immediately.
This is extremely useful if you have written a long text, as you don't have to scroll down to the "send" button.
Attention: If you have inserted links in your text, you need to press Tab multiple times, until the cursor is on the “send” button and press Enter after that.
Ctrl + Shift + N (in Chrome) or Ctrl + Shift + P (in Firefox)
For Macs: Command + Shift + N (or P)
This is a very useful shortcut if you want to view Rap Genius how a non-user does, or if you’re trying to see how a certain Rap Genius search ranks in Google. Pressing this combination of buttons in Chrome or Firefox will open up a window “incognito”, which presents you as a new user to sites like Rap Genius. Incognito also deletes all of your viewing history in an incognito window.
Ctrl + F
For Macs: Command + F
This trick can be used when you’re searching for a certain word/phrase in a song
Tab / Shift + Tab
Mainly useful when adding a new text. Pressing Tab switches between fields (e.g., from the Primary Artist field to the Song Title field), while pressing Shift + Tab does the opposite (goes from the Song Title field back to the Primary Artist field)
Ctrl + T
For Macs: Command + T
This one is more for general Internet usage. To create a new tab press Command + T. This is useful when you’re halfway through typing an annotation and you want to remember that one interview you saw where the artist had said that one thing you’re writing. You can press Command + T to open a new tab and search for it!
Catch & release - Danish inshore fishing - Global FlyFisher
Catch & release
Danish inshore fishing
By Martin Joergensen
Not endangered
The fish in the Danish sea are in no way endangered by rod fishers. Nets are another story, though, but fish are still abundant, and therefore we Danes almost always bring home fish. Small fish are illegal to catch, but many fishermen release a lot of their catch. But no-kill and pure C&R are not common on the Danish shores.
This is the way salmon fisher 'Backwater' Bob Boudreau looks at C&R:
What a great analogy. Remember this when you fight a fish you intend to release.
C&R for life
I've seen many fish caught and released on the Danish coasts, and they were not handled equally gently every time. Let's recap a bit of good advice for C&R:
1. If you positively intend to release the fish, fish single barbless hooks.
2. If you intend to release the fish or it seems to be too small already at distance, get it in quickly to stress it as little as possible
3. If you're in doubt about the size, it's too small
4. Decide as early as possible if the fish will be released. Don't net it and judge it in there
5. You might want to wade into shallow or less turbulent water to land the fish there under better control
6. Try not to net fish that are going to be released. Take them with your hands if possible
7. Don't squeeze the fish. Trout are especially vulnerable, and it's very easy to accidentally squeeze the air out of their swim bladders
8. Keep the fish as much as possible in the water, if possible unhooking it while it's still submerged
9. Loosen the hook without grabbing the fish if possible. Let your hand slide down the line, grab the hook and try to loosen it
10. Don't use tools unless it's absolutely necessary. Locking forceps can be of great help, though
11. Pictures should be taken instantly and with the fish as little out of the water as possible
12. If the fish has to come out of the water, support it with one or two full hands to avoid unnecessary harm
13. Never lift a fish by the tail or gills if it is to be released
14. Let the fish swim away by itself. Don't throw it or splash it into the water. Hold it with a full hand or two and bring it gently under the surface
15. If there's current, hold the fish until it revives. Don't let it tumble downstream
Kill it!
Some people actually kill their fish so let's also look at how a fish should be killed, cleaned and stored
1. Fish that will be killed should be netted with a sufficiently large net
2. When the fish is in the net, grab it through the net to secure it
3. Kill the fish before you remove the hook or take it out of the net
4. Don't wade ashore and bring out the fish before killing it
5. Use a proper priest, not any stone or branch
6. Strike the fish several hard blows on the skull just above the eyes while holding it firmly under the gills with the other hand
7. Big fish don't die easy
8. Smaller fish can be kept in a ring or piece of string by the side. In that way you don't need to wade ashore, but can keep fishing
9. It's best to clean the fish immediately, but if it's kept cold, cleaning can wait some hours
10. Cover the fish with wet sea weed, grass or leaves to keep it wet and cold
11. Don't keep the fish in plastic bags (for too long at least). The lack of oxygen will make the fish a pretty sorry sight
12. Do yourself, nature and the fish the honor of eating it if you kill it. Don't drop it in the garbage after showing it off.
The priest
The priest is not only the guy by the altar on Sundays, but also an instrument devised for killing fish. It normally consists of a handle and a heavy metal head. It's made for the purpose and very efficient. If you want to kill fish regularly, get hold of a priest.
If you whack your fish over the head, you'll want to know what to do with it.
Here we have some Danish recipes.
The Hitchhiker's Guide Project
A giant spaceborne supercomputer built by the Silastic Armorfiends of Striterax. It was the first to be built like a natural brain, in that every cellular particle of it carried the pattern of the whole within it, which enabled it to think more flexibly and imaginatively, and also, it seemed, to be shocked.
The Silastic Armorfiends of Striterax, who were engaged in one of their numerous bloody conflicts and not enjoying it at all, decided that enough was enough and ordered Hactar to design for them an Ultimate Weapon. "What do you mean," asked Hactar, "by Ultimate?"
So Hactar designed an Ultimate Weapon. It was a very, very small bomb that was simply a junction box in hyperspace which would, when activated, connect the heart of every major sun with the heart of every other major sun simultaneously and thus turn the entire Universe into one gigantic hyperspatial supernova.
When the Silastic Armorfiends tried to use it to blow up a Strangulous Stilletan munitions dump in one of the Gamma Caves, they were extremely irritated that it didn't work, and said so.
The Silastic Armorfiends disagreed and pulverized the computer.
Later, they thought better of it, and destroyed the faulty bomb as well.
Hactar, however, survived in particle form, with enough of his abilities left to enable the supernova bomb to be built by the people of Krikkit, for the express purpose of destroying the universe. Fortunately, thanks to the supremely bad bowling skills of Arthur Dent, combined with a newfound ability to fly, the bomb was never detonated and thus, the Universe was saved.
Bessie Figure
Bessie Coleman (1892-1926)
Fearless Fly Girl
A daredevil of her generation, Bessie Coleman soared above the barriers of her time to dazzle audiences and become a fearless aviator of the skies. One of 13 children, Coleman was born in 1892 in Atlanta, Texas. When she was only two years old, her family moved to the farm town of Waxahachie, Texas, where she grew up picking cotton and helping her mother do laundry for customers.
Racial discrimination in the South drove Bessie's father, who was Native American, from Texas back to his home in Oklahoma, where Native Americans enjoyed full civil rights. Her father hoped to build a better life for his family there, but Bessie's mother remained in Texas with her children.
Coleman went to work as a laundress, hoping to make enough money to attend college. Even though local schools closed during the cotton-picking season, forcing children to work during the harvest, Coleman was determined to get an education. She borrowed books from a traveling library and learned enough to graduate from high school. In 1910, she attended Colored Agricultural and Normal University (now known as Langston University), but was forced to drop out after one semester because she ran out of money.
Frustrated that she had to return to her job as a laundress, Coleman moved to Chicago in 1915 to join her two brothers. Vowing to never work again as a maid or laundress, Bessie attended beauty school and worked as a men’s manicurist at the White Sox Barber Shop. Coleman became known as the best and fastest manicurist in Chicago, but had higher aspirations.
Her brother John, who fought in World War I, teased Bessie that French women were superior to Chicago women because they could fly planes. John told his sister she would never be able to fly a plane like the French girls. Coleman set out to prove her brother wrong. She applied to flight schools throughout the country, but was rejected because she was Black and a woman.
Undaunted in her pursuit to become an aviator, Coleman heeded the advice of her friend Robert Abbott, publisher of the Black newspaper the Chicago Defender, who urged her to move to France where racism was not as prevalent to earn her pilot’s license. She began taking French lessons and took a higher paying job as a chili restaurant manager to save money for her move.
In 1920, Coleman sailed for France with her savings and donations from Abbott and other wealthy sponsors. She learned to fly using a French Nieuport plane. After only seven months of taking flying lessons at the Caudron Brothers' School of Aviation in Le Crotoy, France, Bessie became the first Black woman to obtain a license from the Fédération Aéronautique Internationale.
Upon returning to the states in 1921, Bessie quickly realized she needed to learn how to barnstorm and become an aerial daredevil if she ever wanted to make a living as a pilot. So, in 1922 she returned to Europe, learning barnstorming techniques in France and Germany.
Transforming herself into a celebrity of the skies, Bessie flew her first air show on September 3, 1922 at Glenn Curtiss Field in Garden City, New York. Sponsored by the Chicago Defender, the show billed Coleman as “the world’s greatest woman flyer.” She immediately became a star, thrilling audiences with dips and dives such as barrel rolls and loop the loops and unknowingly inspired other Black women to fly.
While preparing for a show in Los Angeles in 1923, Coleman had her first accident. The plane stalled, crashed and knocked Coleman unconscious. She suffered a broken leg and cracked ribs. After taking a year to recover, Bessie regained her fearless spirit and threw her flying into high gear, touring the country giving exhibitions, flight lessons and lectures. On June 19, 1925, she returned to her home state of Texas to fly over Houston's Aerial Transport Field to celebrate the anniversary of the day Blacks in Texas had achieved emancipation from slavery ("Juneteenth").
Following the Houston show, Bessie returned to Waxahachie to fly in an event. Just like other cities in the segregated south, Whites and Blacks attending the event were required to sit in separate areas. At this event, officials even demanded attendees enter the airfield using separate White and Black only entrances. Throughout her life, Coleman tried to use her celebrity to break racial barriers. Coleman refused to perform unless all attendees entered through one gate. After labored negotiations, Coleman prevailed—guests entered through one gate and the show went on.
Sadly, while preparing for a show in Jacksonville, Florida in 1926, her career came to a tragic end. Flying with her mechanic William Wills, Bessie wasn’t wearing her seatbelt so she could lean over out of her seat to scout potential parachute landing spots for the show. Suddenly, the plane nosedived, throwing Coleman from the plane to her death and killing Wills upon impact. Investigators discovered a loose wrench had jammed the plane’s instruments causing Wills to lose control of the plane.
While Coleman never realized her dream of establishing her own flying school, she had a tremendous impact on aviation history and inspired many Black Americans. After her death Bessie Coleman Aero Clubs cropped up throughout the country. In 1977, the Bessie Coleman Aviators Club was founded by Black women pilots for women of all races.
Bessie’s accomplishments in the skies most likely inspired the Tuskegee Airmen to take flight. In 1941, the US Army Air Force began a program to train Black men as military pilots. Nearly 1000 men completed flight training and many went on to fight in World War II. Between May, 1943 and June 9, 1945, the Tuskegee Airmen compiled an enviable record-- none of the bombers they escorted was lost to enemy fighters, they destroyed 251 enemy aircraft and won more than 850 medals.
Recognizing the courage and contribution Coleman made to American history, in 1990 Chicago Mayor Richard Daley renamed Old Mannheim Road at O’Hare Airport Bessie Coleman Drive. In 1995, the US Postal Service issued a stamp in honor of her life. Ever focused on achieving her goals, Bessie never let the barriers of segregation and racism defeat her ambition of becoming a successful pilot.
In honor of Bessie’s great achievement, HIA Toys is proud to offer our Fearless Fly Girl action figure.
Copyright © Titus Venture Group, LLC, All Rights Reserved.
News Release Archive:
News Release 527 of 973
December 19, 2002 09:00 AM (EST)
News Release Number: STScI-2002-16
A Tiny Galaxy is Born
The Hubble images of POX 186 support theories that all galaxies originally formed through the assembly of smaller "building blocks" of gas and stars. These galactic building blocks formed shortly after the Big Bang, the event that created the universe. Astronomers Michael Corbin of the Space Telescope Science Institute in Baltimore, Md., and William Vacca of the Max-Planck Institute for Extraterrestrial Physics in Garching, Germany, used the telescope's Wide Field and Planetary Camera 2 to study POX 186 in March and June 2000. Their results will appear in the Dec. 20 issue of the Astrophysical Journal.
"This is a surprising find," Corbin says. "We didn't expect to see any galaxies forming in the nearby universe. POX 186 lies only about 68 million light-years away, which means that it is relatively close to us in both space and time."
Adds Vacca: "POX 186 may be giving us a glimpse of the early stages of the formation process of all galaxies."
POX 186 is a member of a class of galaxies called blue compact dwarfs because of its small size and its collection of hot blue stars. [The term "POX" is derived from the French "prism objectif," or objective prism, a device that astronomers place in front of a telescope to photograph spectra of all objects in its field of view.] POX 186 was discovered 20 years ago, but ground-based telescopes resolved few details of the galaxy's structure because it is so tiny. To probe the galaxy's complex structure, astronomers used the sharp vision of the Hubble telescope. The Hubble pictures reveal that the system is puny by galaxy standards, measuring only about 900 light-years across, and containing just 10 million stars. By contrast, our Milky Way is about 100,000 light-years across and contains more than 100 billion stars.
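As an aside for readers who like numbers, the scale difference quoted above can be checked with a quick back-of-the-envelope calculation. This sketch is not part of the original release; it simply uses the round figures given in the text (900 vs. 100,000 light-years across, 10 million vs. more than 100 billion stars):

```python
# Back-of-the-envelope comparison of POX 186 with the Milky Way,
# using only the round figures quoted in the press release.

pox_diameter_ly = 900               # light-years across
pox_stars = 10_000_000              # about 10 million stars

milky_way_diameter_ly = 100_000     # light-years across
milky_way_stars = 100_000_000_000   # more than 100 billion stars

diameter_ratio = milky_way_diameter_ly / pox_diameter_ly
star_ratio = milky_way_stars / pox_stars

print(f"The Milky Way is ~{diameter_ratio:.0f}x wider than POX 186")
print(f"The Milky Way has ~{star_ratio:,.0f}x as many stars")
# → roughly 111x wider, with 10,000x as many stars
```

In other words, POX 186 spans about one percent of the Milky Way's diameter and holds about one ten-thousandth of its stars, which is what makes it "puny by galaxy standards."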
So why did POX 186 lag behind its larger galactic cousins in forming? Corbin and Vacca find that the young system sits in a region of comparatively empty space known as a void. Its closest galactic neighbors are about 30 million light-years away. The two small clumps of gas and stars that are merging to form POX 186 would have taken longer to be drawn together by gravity than similar clumps in denser regions of space. The Hubble data don't reveal the ages of the stars in the clumps. Corbin, however, suspects that the oldest stars may be about 1 billion years old, which is young on the cosmic time scale.
The youthful galaxy's puny size may support a recent theory of galaxy formation known as "downsizing," which proposes that the least massive galaxies in the universe are the last to form. In clear contrast to POX 186, the most massive galaxies in the universe, known as giant ellipticals, have a generally spherical structure with few or no young stars, indicating that they formed many billions of years in the past. To actually see the formation process of stars in such large galaxies, astronomers are awaiting the deployment of Hubble's successor, the James Webb Space Telescope. This telescope is designed, in part, to study faint objects whose light left them early in the 13-billion-year history of the universe.
Although the POX 186 results are tantalizing, Corbin and Vacca realize that one galaxy is not enough evidence to support the idea that galaxy formation is an ongoing process. They are proposing to use Hubble to study nine other blue compact dwarfs for similar evidence of recent formation.
Donna Weaver
Space Telescope Science Institute, Baltimore, MD
Michael Corbin
Space Telescope Science Institute, Baltimore, MD
(Phone: 410/338-5001; E-mail:
Saturday, December 14, 2013
Dear Simon,
I rewrote it already and posted it in Wednesday's lesson comments there. Can you give some comments, please?
Yours sincerely,
Just one thing about your Wednesday essay. You mentioned something about why people might agree with the essay question, and then you contradicted that view in a very strong argumentative sentence.
Do you think that writing with an approach like yours to this kind of argument in the exam may give the examiner a good impression of our standard of formal writing?
Hi Jay,
Thanks for sharing your essay - you obviously 'looked closely' and did the hard work that I suggested. Despite this, I'm afraid I can't give you any feedback. If I did this for one person, many others would expect the same help, and it wouldn't be fair if I said no to them. Keep up the good work, and sorry I can't offer any feedback.
Hi Sulaiman,
Yes, it's quite a nice approach, and the examiner would be impressed if you did this well. However, it's not the only approach - I don't think I've used that technique in any of my other essays here on the site.
Dear Simon,
there are some essay types where required to answer the following question:
...Do the advantages outweigh the disadvantages?
I know that we have to discuss both advantages and disadvantages and mention it in introduction that I think advantages outweigh disadvantages. But what if I think that advantages and disadvantages are equal in importance?
Dear Simon,
It is OK. Thanks for replying to me.
Yours sincerely,
Hi Simon,
I took a close look at the essay and have a question. I felt that the UK example in your third paragraph doesn't support the topic sentence of that paragraph very well. I suppose the example should be something about people deciding to go somewhere else because of the different prices they need to pay, but the UK example is just the consequence of such a policy.
Would you explain it to me a bit?
Thank you very much.
Dear Sir,
It is one of the greatest essays from you, and I read it again and again to understand each point.
I have a question for you: how do I write sentences using different grammar rules? Do you have any ideas about that?
Hi Simon
I believe your writing has the following structural errors.
* In your introduction, the topic sentence has no support for your opinion; instead, it is only a rephrasing of the title.
* In your second paragraph, you started an argument which is not explained or even asked about, and therefore it is irrelevant to the question. This part of your writing would have been a good choice if the question were:
"Tourists have to pay more for some cultural and historic attractions because they don't contribute to some of the costs of those places while local residents do. To what extent do you agree or disagree with this opinion?"
* In the third paragraph your example is irrelevant to the main idea in your topic sentence. If we consider paying a different price to be the main reason for tourists' avoidance of a country, you should bring an example or support for why they are reluctant. In fact, in this part you brought the consequence of not coming instead of why they do not come.
I see this writing as a good example of an essay which has good vocabulary and grammar without answering the main question, and I think that was the reason you posted it for us.
It would be great to have your feedback soon, as I am preparing just for my writing.
Hi Nika,
It's fine to argue that the advantages and disadvantages are equal. Just make that opinion clear in the introduction, then explain it in the rest of the essay.
Hi Raveen,
I'm glad the essay helped you. I don't recommend thinking about grammar rules while you're writing a task 2 essay. My advice is to focus on writing a good answer, so the development of good ideas is more important. In terms of grammar, just try to write as accurately as you can (i.e. avoid making mistakes).
Hi Leo and Aria,
Well done for 'looking closely'. You both make some good points (although I'm going to try to stand up for the essay!).
Starting with Aria's point about the introduction, I don't agree that we have a problem there. We don't need to think about 'topic sentences' or supporting arguments at this stage. Just keep the introduction short and simple: introduce the essay topic (paraphrasing the question is an easy way to do this), then give your overall answer. You don't need more than this.
I also disagree with Aria's second point - it's perfectly acceptable to imagine what the opposite argument would be (i.e. why foreign tourists might be expected to pay more) and then refute it. However, it is true that I don't usually do this - I normally stick to writing a topic sentence that fits my argument, then I explain and support it. I think I was experimenting a bit with this paragraph, and my students found it useful to see an example of 'refuting' (I wrote the essay with my students in a lesson).
You both make the point about my example in the third paragraph. I agree that the coherence could be 'tighter', and we could improve the way that the example is explained if we had more time. However, the paragraph is easily good enough for the purposes of an IELTS test, and the example is fine (although perhaps not perfect).
Finally, the essay does answer the main question Aria. I make it clear what my view is, I refute the opposite view, and I give some reasons for my own opinion. That's what the examiner will see. Perhaps I've encouraged you to look too closely and made you over-think your analysis!
Thanks a lot, Simon.
It is very useful to have this kind of advice from an examiner.
As I used to learn some rules and structures from different sources, I thought we needed to follow only those rules. The way you describe it, we have more choices and can be more flexible in our writing.
Now, I have the feeling of being in your class.
No problem Aria. I'm glad you found my explanation useful.
Sir, do you think the listening in the General exam is easier compared to the Academic exam? Please reply if anyone has any idea about this.
I want help with diagrams and picture graphs because I'm going to take the exam.
Minority Report: The Non-White Gamer's Experience
Fergus Mills searches for the words. It's clear he wants to say this carefully. The 22-year-old from Macon, Ga. is black. His Xbox Live avatar is black. Except that it's not.
Drawing it out of him, Mills says it's because of the avatar's body language. And while Mills doesn't say that's really a white guy on his screen, palette-swapped to look like him, he's pretty clear this representation is not from his neighborhood.
"I can make him look like me, but have you noticed, when he's standing right there, the way he moves? It's ... weird," Mills said. "He puts his hand on his hip. He twirls his head. I've never seen people who act like that."
It's a little thing and the discussion moves on. But it is evocative of just how conscious one becomes of these differences, during a life spent playing as characters who look nothing like you.
And in matters ranging from avatar creation and character representation to the marketing and affordability of games, non-white gamers' experiences speak of a video games community that is, at best, insensitive to their membership in it, sometimes to the point of obliviousness.
Kotaku sought out several non-white gamers, some of whom also write about their experiences, to discuss what being an African-American or Hispanic gamer means. In an American games industry dominated, marketed to and consumed mostly by white males, discussions of race and class can quickly hit a wall, blocked by insistence that the subject is inappropriate for a pursuit that should be colorblind in basis. Ideally, yes, it should. But race matters — it always will — in a different way for video games.
Recognizably You
Rafael Sanchez is 23, lives in West Covina, Calif. and has enrolled in graduate school to get a master's degree in computer science. He wants to go to work in game development. If he does, Rafael would be among the 2.5 percent of developers who are Hispanic, according to an International Game Developers Association survey of its membership. A similar percentage of "recognizably Hispanic" characters can be found in video games, according to a study released recently.
Sanchez considers this matter from a game design perspective. "Looking at the casts of fighting games, it really is the only genre where you get a diverse cast," said Sanchez, who writes on the blog Latino Gamer. Many of them begin with a small cast, he said. "As each grows, the initial token, it's a black guy that's thrown in - Eddy Gordo in Tekken, or Zack in Dead or Alive. You usually see the black person first, because they make the most obvious contrast to the white characters on the roster.
Because a "recognizably Hispanic" man is difficult to reduce to visual cues such as black or white skin, "it's harder for [game developers] to think of how to include us," Sanchez says. "And when they do, they can't think of any way to do so other than stereotypes of Mexican wrestlers."
He doesn't say any of this bitterly. "I don't think there's anything malicious behind it; you write what you know," Sanchez explained. "If the game developers and writers are largely white people, I can't really expect them to understand my reality."
The same IGDA survey said its development community is 83 percent white. Blacks comprise 2 percent. Asians make up 7.5 percent, but in a sector with such a strong history across the Pacific, the issue of their representation is notably different from that of black and Latino characters.
Mills, the gamer from Georgia, is resigned to the reality that the characters he plays, reads in comic books and sees on television at best represent him in the values they carry, rather than what they look like. Mills' brother Reginald, nine years older, loved comic books, and parked Fergus in front of the television when the cartoons came on, indoctrinating him to Batman's continuity. Bruce Wayne's upbringing made him "almost like a role model."
"You become so used to it," Mills said. "You turn on the TV, the main character is white. Play a game, the main character is white…You don't think about the underlying meaning of it. It's just what's going on. People really do think of it as the norm; you make a character, he's going to be white."
Why should any of this matter? Dmitri Williams, an assistant professor at the University of Southern California's Annenberg School for Communication, who conducted the study of demographic representation in video games released last month, argues that they represent a market opportunity for publishers.
"If we could get past the issue of racism and think market dynamics, if I'm a young Latino kid, I'll probably be more interested in a game if it has Latino characters," Williams said. "The strong backlash people have is: This is a political correctness issue, and ‘I'm being told how to think and feel,' and ‘I'm being told I'm a racist.' None of that is necessary. You can just look at the numbers and see that some groups should be showing up, in games, in greater numbers."
He points to the cultural impact a generation earlier, when black characters began appearing on television in meaningful roles.
"Any time someone from an under-represented group made that first appearance, it was a big deal for that group," Williams said. "Bill Cosby starring in ‘I Spy' (in 1965), that was a real breakthrough role for African-American actors [on TV]. And it led to whites and African-Americans thinking of themselves in new ways. The simple presence of a group is important."
But if minority gamers represent a market opportunity, game publishers seem slow to pursue it. In fact, another aspect in which non-white gamers feel excluded is in the marketing. If games are pitched or made with their interests or lifestyles in mind, they feel it's usually the next sports title.
"I walk into a GameStop, and they probably think I'm there to buy NBA 2K9 or Madden," Mills said. For the record, his favorite game is Metal Gear Solid 4. He prefers action/adventure games.
Minority Report: The Non-White Gamer's Experience
Gary Swaby, 23, a Briton of black Caribbean ancestry, living in Luton, England, believes that marketing reinforces, more than anything else, the image of gaming as a predominantly, if not exclusively, white activity. "They're definitely trying to market to the masses, and the white families would be their biggest audience," Swaby said. "Most white people are probably in a better financial space than black families, or those of other cultures, and that would mean they're the market [publishers] are going after. I can't remember seeing a Wii commercial with a black family. Blacks are assumed to be poor. That's definitely an issue that can't be ignored." Swaby said he spends between 400 or 500 pounds ($660-$830) annually on games.
Sanchez, while not endorsing stereotype, does find some truth in his own experience as a Hispanic gamer with not much of a disposable income for games. "I walk into a GameStop, I go straight to the used PS2 rack," he says. With tuition for California State-Los Angeles coming due, the games he's writing about on his site, lately, are older, cheaper games. "If I'm talking to someone with more money, and I mention the last game I reviewed, he'll ask why I'm talking about that instead of some $50 or $60 game. I'm straightforward. These are the games I can afford right now.
"When someone has more money, they are able to be more lighthearted about these things," Sanchez continued. Those of us who can't afford the $50 as easily, we put a lot more thought into our purchases. Before I got my Wii, I had been thinking about it for months. [A friend] was very surprised by how much thought I had put into it."
What could be marketed more to Hispanic gamers? "Well, racing games," said Andreas Almodovar, 28, of Oldsmar, Fla. "We love getting into the car industry, love customizing our cars. I think the gaming industry, like [with] Midnight Club and Need for Speed, have tapped into something. I just wish they would take it a bit further."
The Importance of Being Louis
The Koalition, a site dedicated to the interests of the urban or hip-hop gamer, as they put it, was just cited as the best tech blog by the Black Weblog Awards. Swaby and Mills are contributors. A.B. Frasier, 23, of Newark, N.J., is its managing editor, and he says the site was created in part to introduce and expose African-Americans to other types of games, since the community is largely seen as sticking with sports and shooter titles.
But his site's efforts can only go so far. "A lot of kids play games, and I could sit up here and try to introduce these games for the black community, but the truth is it still has to appeal to them. And I think a black character does that," Frasier said. "But it has to be done in a way that everybody can accept."
A good example? Frasier picks Louis from Left 4 Dead. Louis is a black protagonist and a playable character who participates in a way that is not conspicuously or stereotypically "black." He wears a tie. He looks like he stumbled out of the office to start blasting at zombies. Frasier says he even saw Left 4 Dead advertisements on hiphop sites, and says the game has very strong uptake in the black community.
"Valve really did a great job putting a black character in their game," Frasier said. "Not every black guy speaks like Cole Train [in the Gears of War series.]"
Hardwiring a minority character into a game, without stereotype, is a powerful statement, above any game that allows customizable avatars of any ethnicity. As Williams, the researcher, sums it up, "People are probably not going to opt in and say, ‘I've got my squad, but I really need a black guy. I really need a Hispanic guy on it.' They're probably going to create guys who look like themselves."
Game character diversity is not just an issue about the interests of non-whites but about the effect it has on white gamers. Williams brings up the subject of "mainstreaming," something highly debated in communication science. Basically, the theory holds that watching enough images starts to move one's perception toward what they see in the images. Williams, who has studied video games for 10 years and calls himself a hardcore gamer, did a study early in his career that showed that, after playing a game, people said they thought the game world they'd visited was more like their real world. "That's a cultivation effect, and it happens," Williams said. "There's no reason to think it wouldn't happen with race as well."
So the upshot there: The more a white gamer — or a gamer of any ethnicity, frankly — spends time in a homogeneous environment, the cues about race and ethnicity sent by games become even more important. Especially if they're the only or the predominant mass medium being consumed. "Imagine a Latino kid, who lives in an all-Latino neighborhood," Williams said. "If they were only exposed to images of white people through the media, those images will probably have a bigger impact. Contrast that with a Latino who lives in a diverse neighborhood who interacts with white kids all the time. The images from the games won't matter as much."
Walking in Someone's Shoes
Asked what they'd like to see most, all the non-white gamers I talked to have their preferences. Almodovar would love to see Hispanic characters in the Battlefield 2 series, and why not? The U.S. military's Hispanic population has grown steadily over the past decade.
Frasier? "Why can't a guy like Hip Hop Gamer be in G4? One 30-minute show, would it really hurt that much?" Such programming would go a long ways to inclusion, he feels. Sanchez, a role-playing game enthusiast, "would love it if there was a Square-produced RPG that had a brown protagonist."
Swaby wants to know "why can't we make a game with a black character, and market it to everybody?" Of course, Grand Theft Auto: San Andreas stands as the most notable effort in this regard. The game also is five years old.
But what they don't want more of is pretending that race somehow is not an issue, when it is one in every other mass medium in this multicultural society. The consumption of white-dominated mass media by a diverse consumer base is a legitimate, serious topic.
And if games belong to that equation when the discussion is about their artistic value, or their economic impact or cultural relevance, then they also belong in the discussion of the consumption of white-dominated, high-demand mass media by a broadly diverse consumer base. Holding up one's hand to declare it's not an issue will not make it go away.
"It's because a lot of people haven't been taught it's important," said Frasier, speaking of race and the history of race problems. "A lot of people playing games now are young, and brought up in areas where everybody gets along, so I don't think they see the problem. You have to live the life in the shoes of a person of color to understand where they are coming from."
For certain, he's lived enough lives in the shoes of a white character. | <urn:uuid:bce116d9-bc43-4267-9bf1-bfcd03af0786> | 2 | 1.765625 | 0.090178 | en | 0.978279 | http://kotaku.com/5358562/minority-report-the-non-white-gamers-experience?tag=african-american |
Essential Rules of Parenting: Discipline Do's and Don'ts
Be Consistent
When I was a kid, you could answer my mom back one day and she'd laugh and tell you she was pleased you could stand up for yourself. Next day, you could say the same thing and get walloped for it. And there was never any clue to which way she'd go. This applied not only to giving her back talk, but to most other things, too. It meant I spent a lot of my time walking on eggshells.
It also meant I had no idea what was and wasn't allowed -- it seemed to be decided on some kind of secret lottery basis that I wasn't privy to. So there was little point in regulating my behavior. After all, I might get into trouble, but then again I might not. It generally seemed worth the risk -- certainly to me.
Your kids are just the same. They need to know what is and isn't acceptable. And they judge that by what was and wasn't okay yesterday and the day before. If they're not getting a consistent message, they're clueless as to how they have to behave, and those all important boundaries aren't being properly maintained. That means the kids feel confused, insecure, and perhaps even unloved.
I'll tell you the toughest thing about this Rule: It means that a lot of the time, you can't break the rules even when you want to. It's just not fair on the kids. If you've decided that you don't allow the kids to sleep in your bed with you, you have to stick to it (unless you're prepared to change the rule permanently). Just because your little one was a bit sad about something today, and they're so warm and snuggly and smelling of bathtime, and you're feeling a bit down yourself anyway…no, no, no! Stop right there! Let them into your bed once and it will be ten times harder to say no to them next time, and they won't understand why. Say no now (softly and with an extra hug) and you're only being cruel to be kind (to yourself as well as them).
It has long been a popular argument among campaigners for reform of America's marijuana laws that legalization would strike a major blow against the violent Mexican drug gangs that have brought so much misery to parts of that country and, increasingly, along the US border.
Fairly typical of the tone of such reporting was a nice piece for the Monitor earlier this month by Sara Miller Llana, titled "Biggest blow to Mexican drug cartels? It could be on your state ballot."
The piece summarizes a paper from a Mexican think tank that argued legalization in any of the three US states considering legalizing recreational use of the drug – Oregon, Washington, and Colorado – could do major damage to organized crime south of the border:
Well, yes, it could. But with Washington and Colorado now having passed their measures (voters rejected legalization in Oregon), the theory of "more legal pot = less drug violence in Mexico" is about to be put to the test of experience, with a whole host of assumptions made about its salutary effect coming up against facts.
Color me skeptical. While any student of American history knows that Prohibition creates the opportunity for big profits for criminal syndicates, and violence always follows that, the prediction of a big hit in the cartel pocketbook relies on a set of uncertain assumptions: That marijuana production in Washington and Colorado will surge; that this additional supply, without the expense and danger of crossing an international border, will be cheaper and bleed out into the 48 other states, displacing Mexican imports; and that the malign influence of drug gangs on Mexican society will therefore be reduced.
The Mexican think tank estimates that $6 billion a year is derived from marijuana exports from Mexico. Is this estimate accurate? Hard to say. It's not as if we can crunch the numbers from excise tax rolls. But let's assume that's a fairly accurate picture – what portion does that represent of cartel income? Well, nobody knows.
The US Justice Department has estimated that drug shipments from Mexico are worth a total of $18 billion to $39 billion a year, a staggeringly wide range that shows how hard it is to quantify the economics of the overall trade. Is the $6 billion assumption one-third of overall illegal drug shipments? Or is it one-sixth? Or some other number entirely?
Then there are the assumptions of what legalization will cost the drug gangs. The think tank suggests that roughly $2.7 billion in cartel income will be lost as a result of legalization in Colorado and Washington, as new legal production comes on line. But Washington is already one of the top 10 producers of marijuana in the US (as is Oregon).
While surely some additional acreage will come on line in response to legalization, the Feds will be watching closely for evidence that Washington state's marijuana is flooding its neighbors, and growers will still face the risk of seizure of property under federal laws by the Drug Enforcement Administration (DEA). Any businessman thinking about a major marijuana operation in Washington or Colorado, particularly one that will rely on markets where the drug is still deemed illegal, will think long and hard about how much capital to risk. The Obama administration has been fairly aggressive in going after major pot businesses in states that already have legalized medical marijuana.
Finally, there is the fact that cocaine and heroin are far more profitable ounce for ounce for drug traffickers than marijuana. A kilogram of Mexican pot wholesales for about $1,200 in the US. Meanwhile, drug gangs are thought to buy a kilogram of cocaine in South America for about $2,000, and the wholesale value of that kilogram is about $30,000 by the time it makes it to the other side of the Rio Grande (and ends up retailing for as much as $100,000). There are enormous expenses in transporting the drugs compared to legal goods, what with bribes, violence, "taxes" charged by other gangs to cross their territory, the loss of product to seizures, and the fuss of smuggling across the border.
But within that 15-fold markup, there's a lot of pure profit, surely enough that it could make sense for Mexican drug gangs to try to make up for lost revenue with a volume strategy: Cut their profit margins on the US side of the border to stimulate demand, and increase overall profits (potentially leading to an increase in the use of a far more dangerous drug). And a kilogram is still a kilogram. Moving a high-value good per weight makes a lot more sense than moving a low-value one, when the risk of seizure and prosecution is about the same.
Don't get me wrong. I dearly hope that lives are saved, in Mexico and the US, because of the current, uncertain legalization experiments about to begin in Washington and Colorado. A total end to marijuana prohibition in the US would kill stone-dead illegal marijuana imports, much as it killed illegal liquor imports from Canada after the 18th Amendment was repealed in 1933.
But the US is a long, long way from that. And the thirst for illegal profits is never slaked. Sadly, the grim toll of Mexico's war with drug gangs (with an estimated 55,000 people killed in the last six years) is likely to lurch on.
[Numpy-discussion] setting decimal accuracy in array operations (scikits.timeseries)
Marco Tuckner marcotuckner@public-files...
Wed Mar 3 16:23:59 CST 2010
Thanks to all who answered.
This is really helpful!
>> If you are still seeing actual calculation differences, we will
>> need to see a complete, self-contained example that demonstrates
>> the difference.
> To add a bit more detail -- unless you are explicitly specifying
> single precision floats (dtype=float32), then both numpy and excel
> are using doubles -- so that's not the source of the differences.
> Even if you are using single precision in numpy, It's pretty rare for
> that to make a significant difference. Something else is going on.
> I suspect a different algorithm, you can tell timeseries.convert how
> you want it to interpolate -- who knows what excel is doing.
I checked the values row by row, comparing Excel against the Python results.
The values of both programs match perfectly at the data points where
no periodic sequence occurs: at those points where the aggregated value
is a terminating decimal (e.g. 12.04), the results were the same.
At points where the result was a periodic sequence (e.g.
12.222222...), the described difference could be observed.
I will try to create a self contained example tomorrow.
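A minimal sketch of such a self-contained example (with hypothetical numbers; the real comparison would of course use the actual timeseries data) might look like this:

```python
import numpy as np

# Nine hypothetical hourly readings whose mean is the repeating
# decimal 110/9 = 12.2222... (values chosen for illustration only).
values = np.array([12.0] * 8 + [14.0])

mean64 = values.mean(dtype=np.float64)                      # double precision
mean32 = values.astype(np.float32).mean(dtype=np.float32)   # single precision

print(f"{mean64:.15f}")               # double-precision mean
print(f"{np.float64(mean32):.15f}")   # single-precision mean, widened for display

# float32 carries about 24 bits (~7 decimal digits) of precision,
# float64 about 53 bits, so a non-terminating value like 12.2222...
# is rounded differently in the two dtypes.
assert mean64 != np.float64(mean32)
```

When both means are printed to only a couple of decimals they look identical; the discrepancy for periodic results, as described above, only shows up in the lower significant digits.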
Thanks a lot and kind regards,
Given a function $g$ entire on the whole complex plane $\mathbb{C}$, it is possible to find an entire function $f$ such that $f(z+1)-f(z)=g(z)$. The proof can be given using Riemann surfaces, automorphy, coverings, etc. Can anyone find an elementary proof which avoids all such things?
Section 6.3 in [Berenstein and Gay: Complex analysis and special topics in harmonic analysis MR Number=(1344448)] deals with that problem. – Narutaka OZAWA May 29 '14 at 7:49
@NarutakaOZAWA: I liked the reference in the earlier comment that you just deleted. Did you just replace it because it was very old and in French? I think it would be nice for you to give both references. – Neil Strickland May 29 '14 at 7:53
Thank you. I just don't know how to edit. numdam.org/item?id=ASENS_1887_3_4__361_0 – Narutaka OZAWA May 29 '14 at 9:37
Is this the same as mathoverflow.net/questions/4434 ? – David Speyer May 29 '14 at 19:06
1 Answer
Let $L$ be your difference operator: $(Lf)(z)=f(z+1)-f(z)$. Consider these polynomials $$P_n(z)=\frac{1}{n!}z(z-1)\ldots(z-n+1),\quad n=0,1,2,\ldots.$$ Simple computation shows that $LP_n=P_{n-1}$. Polynomials $P_n$ make a basis in the space of all polynomials, because there is one polynomial of each degree. This allows you to find a solution of any equation with polynomial RHS. Then perform a limit process. For the details see any book under the title Calculus of Finite Differences, for example by Nörlund or by Gelfond.
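The identity $LP_n=P_{n-1}$ is easy to confirm; as a quick sanity check (my addition, not part of the original answer), one can verify it with exact rational arithmetic at integer sample points: since both sides are polynomials of degree at most $n$, agreement at $n+1$ points settles the identity.

```python
from fractions import Fraction
from math import factorial

def P(n, z):
    # P_n(z) = z (z-1) ... (z-n+1) / n!, with P_0 = 1
    num = 1
    for k in range(n):
        num *= (z - k)
    return Fraction(num, factorial(n))

# (L P_n)(z) = P_n(z+1) - P_n(z) should equal P_{n-1}(z).
for n in range(1, 8):
    for z in range(-3, n + 2):          # more than n+1 sample points
        assert P(n, z + 1) - P(n, z) == P(n - 1, z)
print("L P_n = P_{n-1} holds for n = 1..7")
```

Using `fractions.Fraction` keeps the check exact, so there is no floating-point doubt about the polynomial identity.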
fantastic! i was expecting an answer from you. – Koushik May 29 '14 at 8:00
Let $f(x,y)$ define a surface $S$ in $\mathbb{R}^3$ with a unique local minimum at $b \in S$. Suppose gradient descent from any start point $a \in S$ follows a geodesic on $S$ from $a$ to $b$. (Q1.) What is the class of functions/surfaces whose gradient-descent paths are geodesics?
Certainly if $S$ is a surface of revolution about a $z$-vertical line through $b$, its "meridians" are geodesics, and these would be the paths followed by gradient descent down to $b$. So the class of surfaces includes surfaces of revolution. But surely it is wider than that?
(Q2.) One could ask the same question about paths followed by Newton's method, which in general are different from gradient-descent paths, as indicated in this Wikipedia image:
Newton's vs. Gradient Gradient descent: green. Newton's method: red.
(Q3.) These questions make sense in arbitrary dimensions, although my primary interest is for surfaces in $\mathbb{R}^3$.
Any ideas on how to formulate my question as constraints on $f(\;)$, or pointers to relevant literature, would be appreciated. Thanks!
Let me add a fourth side to the question. (Q4.) Let $h:{\mathbb R}\rightarrow{\mathbb R}$ be smooth and increasing. Let $f$ be a function as in (Q1). Is $h\circ f$ such a function too ? – Denis Serre Oct 18 '10 at 13:55
@Denis: I see your motivation. Excellent question! – Joseph O'Rourke Oct 18 '10 at 14:21
2 Answers
For (Q1). The tangent space of $S$ is generated by the gradient flow vector field $v = (|\nabla f|^2, \nabla f)$ and the tangents to the level sets $w= (0, \nabla^\perp f)$. The geodesic constraint can be imposed as the condition "no sideways acceleration", which means that $[(\nabla f \cdot \nabla )v] \cdot w = 0$. This implies that $\nabla^2_{ij} f \nabla^if \nabla^{(\perp)j}f = 0$. In other words, the eigendirections of the Hessian of $f$ must be $\nabla f$ and its orthogonal, or that $\nabla f$ is parallel to $\nabla |\nabla f|^2$. So this means that $f$ and $|\nabla f|^2$ share the same level sets. (This same characterization is valid for any dimension; so also answers (Q3). )
In particular, this answers Denis Serre's (Q4) in the positive.
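As a numerical illustration of this criterion (my sketch, not part of the answer above), one can check whether $\nabla f$ is parallel to $\nabla|\nabla f|^2$ by testing that their 2-D cross product vanishes, for a surface of revolution versus an elliptic paraboloid. The gradients below are supplied analytically, and $\nabla|\nabla f|^2$ is approximated by central finite differences:

```python
def parallel_defect(grad_f, pts, h=1e-6):
    """Max |cross(grad f, grad |grad f|^2)| over sample points; a value
    near zero means f and |grad f|^2 share level sets, i.e. the
    gradient-descent paths of f are geodesics."""
    def s(x, y):                      # s = |grad f|^2
        gx, gy = grad_f(x, y)
        return gx * gx + gy * gy
    worst = 0.0
    for x, y in pts:
        gx, gy = grad_f(x, y)
        Sx = (s(x + h, y) - s(x - h, y)) / (2 * h)   # central differences
        Sy = (s(x, y + h) - s(x, y - h)) / (2 * h)
        worst = max(worst, abs(gx * Sy - gy * Sx))
    return worst

pts = [(0.7, -0.3), (1.2, 0.4), (-0.5, 0.9)]

# f = (x^2 + y^2)^2, a surface of revolution: grad f = 4(x^2 + y^2)(x, y)
radial = lambda x, y: (4 * (x * x + y * y) * x, 4 * (x * x + y * y) * y)
# f = x^2 + 2y^2, an elliptic paraboloid: grad f = (2x, 4y)
elliptic = lambda x, y: (2 * x, 4 * y)

print(parallel_defect(radial, pts))    # tiny: only finite-difference noise
print(parallel_defect(elliptic, pts))  # order 1: descent paths not geodesics
```

For the radial example the cross product vanishes identically, and the computed defect is only finite-difference noise; for the paraboloid it is $32xy$, which is clearly nonzero off the axes, matching the expectation that only the axis lines are both descent paths and geodesics there.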
@Willie: That is a beautiful characterization, that $f$ and the square gradient share the same level sets! I do not yet see what this implies in terms of the global geometric shape of $f$, but it certainly is a succinct encapsulation. Thanks! – Joseph O'Rourke Oct 18 '10 at 18:23
Two functions share the same level sets iff the values of each of them only depend on the values of the other. So Willie's condition writes as an eikonal equation $|\nabla f(x)|^2=h(f(x)),$ and I guess that solutions are all of the form $f(x)=g(\mathrm{dist}(x,C)),$ at least locally ($g$ and $h$ being related by $1/h=(g^{-1})'$). Here $\mathrm{dist}(x,C)$ is the point-set Euclidean distance from $C$. – Pietro Majer Oct 18 '10 at 19:18
@Pietro: And $C$ is ... ? – Joseph O'Rourke Oct 18 '10 at 19:21
(I guess) $C$ could be any subset, even not smooth, for the distance function from $C$ is 1-Lipschitz, thus differentiable a.e., and $|\nabla d_C|=1$, so in any case one gets a solution. A smooth, convex $C$ should give solutions defined everywhere. Note that $C$ a point gives the surfaces of revolutions you mentioned in the questions. – Pietro Majer Oct 18 '10 at 19:31
(sorry, I changed notation: $d_C(x)=\mathrm{dist}(x,C)$). Another way to characterize the functions $f$ should be, that sublevel sets $\{f<c\}$ are uniform neighborhoods of $C$ (or of any sublevel set $\{f<b\}$, with $b<c$). – Pietro Majer Oct 18 '10 at 19:46
Here is a function $f(x,y)$ which is 0 inside the square $C=[\pm1,\pm1]$, and outside that square has value equal to the Euclidean distance $d( p, C )$ from $p=(x,y)$ to the boundary of $C$. [I am trying to follow Pietro's suggestion, as far as I understand it.] It is not a surface of revolution (but it is centrally symmetric). Are its gradient descent paths geodesics? I think so...
Function, Contours
Left above: $f(x,y)$. Right above: Level sets of $f$. Below: $\nabla f$.
And here (below) is a closeup of the function defined using squared distance $[d( p, C )]^2$, as per Will's suggestion:
Closeup of the squared-distance function
Hmmm, like Jeopardy, :) your answer is in the form of a question... I think they just might be, but now in this case, there is no point that is a unique minimum. Everything within the square $C$ is the minimum. So what is your gradient descending to? The closest point in $C$ to your starting point? So you've got 4 voronoi cells defined by (0.5,0) (0,0.5) (-0.5,0) and (0,-0.5) for E/N/W/S directions, which delineate the four regions closest to the 4 edges of $C$,except for any point within $C$ in which case its gradient descent is the point itself. – sleepless in beantown Oct 19 '10 at 0:53
@sleepless: Yes, no unique local min, but still one can define descent paths for all points exterior to the central square. Your description is accurate, including answer$=$question! – Joseph O'Rourke Oct 19 '10 at 1:02
Or, you've got 8 regions, with the 4 quadrants translated (+/-0.5, +/-0.5) in the xy plane with gradient descents for any point in those planes mapping to the corners of the square $C$, e.g. (x≥0.5,y≥0.5)→(+0.5,+0.5), and the other three quadrants where you swap in less-than signs for -0.5 values all having gradient descents as lines going to the corners of the square $C$; and the regions for x > 0.5 and $-0.5 \le y \le 0.5$ having a gradient descent to $(y,0.5)$, and three other similar regions mapping to the edges of the region $C$ – sleepless in beantown Oct 19 '10 at 1:02
So the answer in this case, is a definite YES, the gradient descents for this figure for any point starting outside of $C$ are definitely geodesics. I'd call this figure a circle fractured into quadrants intercalated by strips of the plane defined by a square at the center of the fractured circle quadrants. (instead of drawing the contours of the distance, draw the gradient of the function and you'll see it clearly.) – sleepless in beantown Oct 19 '10 at 1:09
By the way, this function certainly is rotationally symmetric about the $z$-axis with 4-fold symmetry. – sleepless in beantown Oct 19 '10 at 1:24
ABO blood group system, the classification of human blood based on the inherited properties of red blood cells (erythrocytes) as determined by the presence or absence of the antigens A and B, which are carried on the surface of the red cells. Persons may thus have type A, type B, type O, or type AB blood. The A, B, and O blood groups were first identified by Austrian immunologist Karl Landsteiner in 1901. See blood group.
Blood containing red cells with type A antigen on their surface has in its serum (fluid) antibodies against type B red cells. If, in transfusion, type B blood is injected into persons with type A blood, the red cells in the injected blood will be destroyed by the antibodies in the recipient’s blood. In the same way, type A red cells will be destroyed by anti-A antibodies in type B blood. Type O blood can be injected into persons with type A, B, or O blood unless there is incompatibility with respect to some other blood group system also present. Persons with type AB blood can receive type A, B, or O blood, as shown in the table.
Blood group O is the most common blood type throughout the world, particularly among peoples of South and Central America. Type B is prevalent in Asia, especially in northern India. Type A also is common all over the world; the highest frequency is among the Blackfoot Indians of Montana and in the Sami people of northern Scandinavia.
The ABO antigens are developed well before birth and remain throughout life. Children acquire ABO antibodies passively from their mother before birth, but by three months of age infants are making their own; it is believed that the stimulus for such antibody formation is from contact with ABO-like antigenic substances in nature. ABO incompatibility, in which the antigens of a mother and her fetus are different enough to cause an immune reaction, occurs in a small number of pregnancies. Rarely, ABO incompatibility may give rise to erythroblastosis fetalis (hemolytic disease of the newborn), a type of anemia in which the red blood cells of the fetus are destroyed by the maternal immune system because of a blood group incompatibility between the fetus and its mother. This situation occurs most often when a mother is type O and her fetus is either type A or type B.
New flavour tool from Symrise opens up opportunities in product development
Number four flavour house Symrise builds a tool that brings new opportunities in product development for food technologists through a stronger understanding of the complex nature of foodstuffs, writes Lindsey Partos.
The firm has developed a high-temperature liquid chromatography method known as "LC Taste" that allows researchers to separate aroma chemicals and flavouring components from solutions, using a non-toxic blend of solvents.
Essentially, a splitter device links the mobile phase of the liquid chromatography to human taste buds.
An online panel is directly connected to the liquid chromatography and is able to comment on the taste - bitter, sweet, pungent et al - of food materials under mechanical analysis.
"By understanding the presence of undesirable 'notes', this new technology will give food developers the chance to search and screen compounds that they want to avoid in their final food product," says Dr. Gerhard Krammer, senior vice president of flavour innovation at the German firm Symrise.
Dr. Krammer quotes the example of soy based products. "People don't want the bitter note: our technology can identify it in the compound, thereby enabling the food technologist to do something about it, by avoiding or masking the undesirable note," he tells
Once food engineers know the compound, they can be quite flexible with the formulation, he adds.
Immediately tasting isolated components makes it possible to evaluate olfactory (aroma), retronasal (through the mouth to the nose) trigeminal (spicy, warming, cooling et al) and taste characteristics.
The LC Taste user can recognise key flavouring substances such as vanilla and maltol, as well as substances such as bittering agents, amino acids / peptides, sucrose, flavour enhancers, sugar and capsaicinoids.
The new technique is currently being used for dairy, beverage and culinary applications.
Food technologists currently use laborious fractionation methods to improve their understanding of the non-volatile aspects of food. Dr. Krammer suggests his firm's new technology, which "doesn't use additional solvents but uses high temperatures," is far less time-consuming.
"This method enables food scientist to see the blueprint of food," adds Dr. Krammer.
Symrise claims this new method can be used for product development in a wide range of foods, including beverages, yoghurts, prepared meals and savoury snacks.
Foods contain substances that act upon the senses, such as volatile aroma components as well as both volatile and non-volatile flavouring substances.
These compounds convey key sensory impressions by stimulating the roughly 5 million olfactory cells in the nose and/or the taste buds on our tongues.
In addition to olfactory stimulation (smells), these impressions mainly include gustatory perception (tastes), such as sweet, savoury, acidic, bitter and umami (from the Japanese word for flavourful).
Trigeminal perceptions such as spicy, warming, cooling or tingly round out the overall impression, which is what determines whether a food tastes good to us or fails to meet our expectations.
Depending on even the most subtle structural differences in aroma and flavouring compounds, food may taste "home-made" or may fail to give us any real pleasure.
Chromatography can be used to separate flavouring substances, which can then be evaluated for their sensory characteristics after a certain period of time. Gas chromatography/olfactometry (also called olfactory GC) is used to separate aroma/flavouring substances, which can then be inhaled in the carrier gas and surrounding air and evaluated by smell.
High-performance liquid chromatography (HPLC) allows researchers to analyse mixtures of substances in solution, but the components isolated cannot be tasted directly, because the mobile phases typically used are toxic and have to be removed using complex separation procedures.
Toxic substances can be removed from flavourings using thermal processes or extractions, but these methods subject the flavouring compounds to extreme stress and can significantly alter them.
High-temperature liquid chromatography (HTLC), an offshoot of HPLC, is more effective, says Symrise. HTLC can be performed using non-toxic solvent mixtures, thereby eliminating the need for complicated purification steps that can affect the flavour constituents.
Pure water and/or aqueous mixtures can be used as solvents; additional components, such as oils, fats, ethanol, physiologically tolerable salts such as sodium chloride, and acids such as phosphoric acid can be added to the mixture depending on the application.
The Holzminden-based firm has filed a patent on its 'LC Taste' technique.
Private equity owned Symrise, formed in a 2002 merger between flavour firms Haarmann & Reimer and Dragoco, has just headed into the second round of bidding for the food chemicals arm of German group Degussa.
Joyce Beatrice in the US
Meaning & Origins
98th in the U.S.
Italian and French (Béatrice): from a medieval female personal name borne in honor of a 4th-century saint, martyred together with her brothers Simplicius and Faustinus. Her name was originally Viātrix meaning ‘traveler’ (a feminine form of viātor, from via ‘way’), a name adopted by early Christians in reference to the journey through life, and Christ's description of himself as ‘the way, the life, and the truth’; it was later altered as a result of folk etymological association with Latin beatus ‘blessed’.
20,499th in the U.S.
Sunday, 19 May 2013
WELCOME TO DELAREYVILLE
In July 1976,I wrote a 2- part article in 'Deepika" about the apartheid white minority rule
in S.Africa,particularly referring to the June 1976 student uprising in Soweto,in
which about 600 school children were killed by the police.What began as a protest
against the imposition of Afrikaans as the medium of instruction in black schools,
turned into a mass uprising.The uprising was suppressed with brute force,but it
drew the attention of the international community to the oppression in South Africa,
whereby sanctions against South Africa were intensified,and the regime was more
and more isolated.The Soweto uprising was triggered not only by the imposition
of Afrikaans,but also by inferior education for black children.In short,the event
speeded up the demise of apartheid.
At that time, I was in Kenya. Very little information was available about S. Africa due to its isolation. I collected the information from BBC radio and Newsweek. It didn't figure in my wildest imagination that I would come to this country and spend 25 years here. Sometimes, life is an exhibition or Mela, where you have the opportunity to see and enjoy more than what you initially expected, where more and more doors are opened in front of you, automatically.
In the 1970s Nigeria attracted thousands of expatriates: teachers, doctors, engineers and so on, to beef up manpower for the implementation of their massive development programme, funded with newly-found oil wealth. That programme didn't produce the desired effect due to corruption and mismanagement. For some time, Nigeria was described as the 'paradise of teachers'. Thousands of Malayalees got contracts there as teachers, and benefitted from the generous terms. I was in Nigeria from 1981 to 1987. By 1986, the Nigerian economy had collapsed, and the expatriates fled in droves.
Coinciding with the collapse of the Nigerian economy, there were hushed talks in our circles about a new 'promised land': South Africa. Indian citizens were banned from travelling to South Africa, as a protest against apartheid. Then, how was it possible to enter South Africa? Some geographical absurdities came to our rescue. In the heart of South Africa, there is a tiny country called Lesotho, which looks like the yolk in a fried egg! In South Africa, the apartheid system had produced some 'independent' homelands for blacks, which had their own self-governments. These governments were allowed to employ Indian teachers and doctors. They gave the visa on paper, not in the passport. So, an Indian national would buy an air ticket to travel to Lesotho. Some got employment there, and later shifted to the homelands.
This is how most of us landed in South Africa, even before apartheid was dismantled.

My original destination was Transkei, where my brother, Devasiachen, was already settled. On 7th January 1988, I landed at Jan Smuts Airport (now Oliver Tambo International Airport). The airport was tiny, and almost deserted, due to the international boycott of South Africa. I took a flight to Transkei. The schools were already open. My arrival there coincided with a small 'coup' there, and new appointments were put on hold. I had to look for a job elsewhere. I travelled to the homeland of Bophuthatswana, where the famous Sun City is located.
My brother lived in a small town called Mt. Frere. While staying there, I accompanied him for shopping at a major town called Kokstad, about 100 kms away. I was very surprised to see the up-to-date facilities and cleanliness of the town. It looked like Europe or America transplanted into Africa. Compared with South Africa, Nigeria was centuries behind. For instance, in Nigeria, we used to go to a Sunday market at Michika, 20 kms from Shuwa, in Northern Nigeria, where we lived. The conditions there were primitive. It was an open market, where things were displayed and sold in a primitive way, without proper weights and measures. For example, meat pieces were placed on a dirty wooden table, with flies swarming on them. The weight and price were mere speculation. Rice was measured in 'mudus' or similar vessels. But here in South Africa, the supermarkets abounded with high quality products, well packed, weights and prices clearly indicated, and hygienically handled.
On 23 February 1988, I arrived in Delareyville, about 1000 kms away. This is what I saw...

The town is small but beautiful, with wide, clean roads. It looks like a small town in a Clint Eastwood movie. All the shops, offices, banks, Post Office, Magistrate's Courts, police station etc are within walking distance. There isn't even a bus ticket thrown on the road or on the sidewalks. This doesn't seem to be a country in conflict. Dustbins are placed in every street corner, where people deposit waste. Most of the people are black. There's a predominance of Afrikaans. Even if a shop owner knows some English, he/she won't talk to you in English, as if there's a ban on it. I don't see any Indians anywhere. There are some takeaways where you can get a variety of food items such as chips, fish, fried chicken etc at a reasonable price. Coca Cola and a variety of juices are available. Apples, bananas, pears, grapes etc are also available.
I made friends with a fruit seller. He knew a bit of English. From him I got information about my destination, Atamelang, a small black township 25 kms away. He showed me the area where taxis are parked. There's a spacious bus station too. There are no whites in the taxi stand or in the bus station. Perhaps it's a disgrace for whites to travel by bus or taxi.
During the apartheid era, the white minority regime enacted a law called the Group Areas Act, whereby the cream areas of the country were reserved for white settlement (kannaya sthalangal), and the thirikida, or barren, areas were allocated to blacks, coloureds and Indians. Blacks were allowed to come to the cities during working hours to work for the whites as domestic workers, gardeners, shop assistants, unskilled labourers and so on, and were required to go back to their townships after work. No black person was supposed to be seen in the towns at night. Blacks had to carry a pass all the time; otherwise they would be arrested and thrown in jail. The blacks were treated like foreigners in their own country. More than 80% of the land was owned by whites, who were only 20% of the population. The white regime didn't encourage black education, because they needed cheap labour. Their attitude to black education was: "ee kochinu ee kashayam mathi" (for this child, this much medicine is enough).
Delareyville had many satellite townships and villages within a 60 kms radius, which supplied cheap labour. Atamelang was such a township, cleverly established at a safe distance of 25 kms, to keep blacks out as far as possible. Atamelang was part of the homeland of Bophuthatswana, which was not approved by the ANC. It might sound absurd, a republic within a republic, but we have our own absurdities, eg Mahe, which is geographically a part of Kerala, but is a part of Pondicherry!

The area's education offices were located at Atamelang. There were about 10 Malayalees already settled there. I boarded a bus to Atamelang. There was no conductor in the bus; the driver was driver/conductor. The bus was up to date, with automatic doors. The seats were soft and comfortable. The whites had deployed nice buses because they didn't want any late arrival for work due to a breakdown of the bus! The bus roared past vast farms of maize and sunflower.
(to be continued)
New Internationalist
Nuclear weapons: a history
June 2008
‘The explosive force of nuclear fission has changed everything except our modes of thinking and thus we drift towards unparalleled catastrophe. We shall require an entirely new pattern of thinking if humankind is to survive.’ Albert Einstein, 1946
The Manhattan Project
Scientific breakthroughs in the 1930s made atomic bomb production possible. Fearing the prospect of Hitler developing nuclear weapons, top physicists from around the world joined the secret ‘Manhattan Project’ to develop them first. Unprecedented funding came from the US. When Germany surrendered in May 1945, the Manhattan Project had not yet developed a working weapon. Many scientists lobbied for their research to be turned to peaceful purposes. But US President Harry Truman saw the advantage of possessing the bomb ahead of the Soviet Union, and ordered the first test in July, resulting in the mightiest explosion humanity had ever witnessed.
Survivor: Nagasaki bomb victim Sumiteru Taniguchi looks at a photo of himself taken in 1945. His horrific burns have required 17 operations.
Truman immediately decided to use this awesome weapon to attack Japan, with which the Allies were still at war. Officially, this was to force the stubborn Japanese leadership to capitulate. In fact, Japan was already seeking a negotiated surrender. It seems likely that the US nuked Japan to show the world that it had a unique and devastating weapon and was prepared to use it.
On 6 August 1945, a bomb known as ‘Little Boy’ was dropped on Hiroshima. Resident Dr Shuntaro Hida was visiting a patient outside the city at the time: ‘My whole heart trembled at what I saw. There was a great fire ring floating over the city. Within a moment, a massive deep white cloud grew out of the centre and a long black cloud spread over the entire width of the city, the beginning of an enormous storm created by the blast. I decided I had to return as soon as possible. I looked at the road before me. Denuded, burnt and bloody, numberless survivors were in my path; some crawling on their knees or on all fours, some stood with difficulty or leant on another’s shoulder. No-one showed any sign that helped me to recognize him or her as a human being. The cruellest sight was the number of raw bodies that lay one upon the other. Although the road was already packed with victims, the terribly wounded, bloody and burnt kept crawling in. They had become a pile of flesh.’
After shock
‘About a week after the bombing unusual symptoms began to appear in the survivors,’ remembers Dr Shuntaro Hida. ‘When patients raised their hands to their heads while struggling with pain, their hair would fall out. Experiencing severe symptoms of fever, throat pain, bleeding and depilation, the survivors fell into a dangerous condition within an hour of the onset. Very few escaped death. Our patients were dying from a bomb which could kill them long after the blast.’ The total number of deaths in the first hours was 75,000, but many more died within a week from acute radiation poisoning. By December 1945, 140,000 were dead, and by the end of 1950, 200,000.
Three days later, the US dropped a second bomb – nicknamed ‘Fat Man’ – on Nagasaki. Around 40,000 died immediately, rising to 140,000 by the end of 1950. Truman promised to eliminate Japanese cities one by one in a ‘rain of ruin’. Japan surrendered on 15 August, on the same conditions it had asked for before the bombings.
Test victim: an abandoned baby in Semipalatansk, Russia’s nuclear test site. Over a million people in the region have been contaminated with radiation from over 500 bomb explosions. Photo: paul lowe / panos
The H-bomb
Moscow had obtained information from spies involved with the Manhattan Project. After the War, it took the Soviets only four years to produce their first fission bomb. Truman retaliated with a crash programme to develop a weapon thousands of times more powerful again: the ‘hydrogen’ or thermonuclear bomb. Although many scientists objected, their concerns were ignored. The US tested its first fusion bomb (code-named ‘Mike’) in 1952. More than 450 times the power of the Nagasaki bomb, it obliterated Elugelab atoll in the Marshall Islands. Not to be outdone, the Soviet Union exploded its first thermonuclear device in August 1953.
Jellyfish babies
MAD world
Throughout the 1950s the US and USSR competed for nuclear supremacy. By the 1960s both had developed intercontinental ballistic missiles which could be launched far away from their target, and submarine-launched missiles which could sneak up without any radar warning. This situation came to be known as Mutually Assured Destruction (MAD) or ‘deterrence’. Never mind who attacked first – both nations would be damaged to the point of collapse. This meant, the theory went, that war would be suicide and so no country would risk it. But far from keeping the arms race under control, MAD provoked the production of thousands of nuclear weapons by both superpowers, each striving to possess enough firepower to launch a nuclear first strike that destroyed the ability of the attacked country to respond.
The climax of diplomatic brinksmanship came in early 1962 when the US discovered that Russia was placing missiles in Fidel Castro’s Cuba, allowing for a nuclear attack on the US mainland. The two superpowers came terrifyingly close to a nuclear war, averted by a last minute compromise.
Join the club
In the meantime, three more countries had joined the nuclear club. The British Government was determined to get its own bomb. As Foreign Secretary Ernest Bevin bluntly put it: `We have got to have this thing over here whatever it costs… and we’ve got to have the bloody Union Jack on top of it.’ Bevin got his wish in October 1952. From 1958, Anglo-American co-operation meant that Britain’s nuclear arsenal was dependent on the US for its operation. France launched a civil nuclear research programme in the 1950s, a by-product of which was weapons-grade plutonium. Under Charles de Gaulle it successfully tested a nuclear bomb in 1960. China – with help from a subsequently regretful Russia – was able to test an A-bomb in 1964, a nuclear missile in 1966, and an H-bomb in 1967. China is the only state committed to using its nuclear weapons only in retaliation to a nuclear attack.
Resist and control
As the danger grew, public opposition to the bomb snowballed. In 1950, the ‘Stockholm Peace Appeal’ secured 500 million signatures from 79 countries calling for nuclear weapons to be banned. Shock at the scale of radioactive contamination at Bikini Atoll provoked calls for a ban on nuclear testing. In 1958, the Campaign for Nuclear Disarmament was launched in Britain. Anti-nuclear marches attracted tens of thousands, and dedicated activists engaged in civil disobedience, some undergoing lengthy prison sentences.
The first serious attempts by politicians to reduce tensions and control the spread of nuclear weaponry were prompted by the Cuban Missile Crisis. A military hotline was installed between the US and Soviet presidents, aimed at improving communication and avoiding dangerous misunderstandings. The two superpowers signed the Partial Test Ban Treaty in 1963, agreeing not to test nuclear weapons in the atmosphere, underwater, or outer space. Testing underground continued.
To a cultural backdrop of ‘make love not war’ and ‘ban the bomb’, the late 1960s was a period of great optimism about disarmament. Several arms-control treaties were signed, culminating in 1968 with the Nuclear Non-proliferation Treaty (NPT). Signed by most countries, it committed the five nuclear weapon states (NWS) – France, China, USSR, Britain, US – not to ‘assist, encourage, or induce’ a non-nuclear weapon state (NNWS) to acquire nuclear weapons. NNWS agreed in turn not to develop such a capability. This has largely been adhered to. Unfortunately, a commitment within the Treaty to disarm has not been complied with by the NWS. The NPT also enshrines the right of all states to develop nuclear energy, which has proved deeply problematic because the transition from civilian to military capability is relatively simple.
Star Wars and mass protests
Nuclear arsenals continued to grow in the 1970s. In 1979 British and German leaders agreed to allow the US to site 572 US Cruise and Pershing missiles on their territory, with Italy, Belgium and the Netherlands soon signing up as well. In 1981, Ronald Reagan came to power. Treaties were out, and talk of fighting a global thermonuclear war was in. He announced plans for a ‘Strategic Defense Initiative’ – known as ‘Star Wars’ – to enable the US to make a nuclear attack on the USSR and protect itself from retaliation.
Fears that the US was planning to fight a nuclear war with the USSR in Europe sparked widespread concern. The first half of the 1980s saw a million people march for nuclear disarmament in New York City. Hundreds of thousands took to the streets across Europe in the biggest protests since the Second World War. With New Zealand leading the way, towns, cities and countries declared themselves ‘nuclear free zones’.
The Cold War thaws
When Gorbachev came to power in 1985 it was clear to him that the USSR could no longer afford an arms race with the West. He began to roll back military spending and disarm the Russian nuclear arsenal. He initiated serious negotiations with Reagan who, just before being elected to a second term, had changed his position openly to embrace disarmament. A flurry of arms control agreements followed.
As the USSR dramatically disintegrated in the late 1980s, the threat of nuclear apocalypse at last seemed to have receded. In the following decade, the US and Russia both halved their stockpiles of nuclear weapons, from a peak of 65,000 in 1986. But this was by no means the end of world – or nuclear – history.
Nuke kids on the block
By the end of the 20th century the five original nuclear weapons states no longer had a monopoly. Israel has never officially confirmed or denied its possession of the bomb, but in 1986 the existence of nuclear warheads was leaked to the press by technician Mordechai Vanunu. He then spent 18 years in prison for treason. In 1998 India ran tests and declared it had the bomb. National jubilation was quickly dampened when arch-rival Pakistan responded with successful tests, raising the spectre of a South Asian nuclear war. In January 2004 it emerged that the revered head of Pakistan’s nuclear programme, Dr AQ Khan, had been secretly selling nuclear weapons capability to Libya, Iran and North Korea. Thanks in large part to Khan, North Korea announced in 2003 that it was building a bomb. Its test in October 2006 was more of a ‘fizzle’, but enough to bring North Korea into the nuclear club.
Video: Huge Dust Devil Prowling Mars
Dick writes about Earth and planetary science for Science magazine.
Credit: NASA/JPL/University of Arizona
Earth may have terrifying tornadoes, but when it comes to dust devils, Mars has us beat. A camera onboard the Mars Reconnaissance Orbiter has captured a stunning example of a swirling funnel of dust spinning up to an altitude of 20 kilometers. (The animation above provides a side view.) On Earth, tornadoes often reach such heights, but dust devils seldom reach up more than a few hundred meters. That's because dust devils only draw their energy from the solar heating of the surface; tornadoes also tap the heat energy from the condensation of water vapor in a tornadic storm. Mars is too dry for that, but the thinness of its air allows dust devils to soar, even on their restricted energy diet. Astronauts wouldn't be knocked off of their feet if caught in one, but martian dust devils are strong enough to play many roles. They loft dust high into the atmosphere between major dust storms. Some Mars scientists suspect dust devils generate enough static electricity to produce bleach-like chemicals that consume any organic matter—and any living thing—in martian soil. And dust devils have certainly lent NASA a hand; they occasionally blow the dust off a rover's solar cells, letting it power back up and keep on truckin'.
Microsoft says the cloud will generate millions of jobs
Microsoft claims that there will be millions of jobs created from the building of cloud based computing.
The findings come from Volish commissioned research from beancounters at IDC. IDC said that cloud computing will create nearly 14 million new jobs globally by 2015 and cash made from cloud based innovation could reach $1.1 trillion per year by 2015.
John F. Gantz, chief research officer and senior vice president at IDC, said that cloud computing should be a no-brainer for organisations. However, it should not cause job losses, as people think, but should be a job creator. Cynics might wonder where they have heard 'wealth creator' before.
Some industries will generate job growth at different rates, and public cloud investments will drive faster job growth than private cloud investments. The report also notes governments can influence the number of jobs created by cloud computing within individual countries.
Of course, there could be a few problems with this glorious vision. For a start, it does not appear to have factored in the fact that the US government wants to snoop on all data which goes through US companies.
This means that if any European company signs up to a cloud, it would be sensible to have all its data stored by a European supplier. While this still means that IDC's predicted jobs could come about, until the US government starts seeing sense about cloud-based snooping, it is unlikely to help Microsoft much.
Thursday, 3 January 2013
Mr Jerrold and Mr Caudle
Born on this day in 1803 was Douglas William Jerrold, one of those industrious Victorian writers who seem never to have slept. He was a successful dramatist (his first staged piece written when he was 14), a hugely prolific critic and journalist, a famous conversationalist and wit, friend of Dickens, founder-editor of half a dozen magazines and a mainstay of the early Punch. It was there that he published the work for which he is still (just) remembered - that gem of Victorian comedy, Mrs Caudle's Curtain Lectures. These are verbatim accounts, written from memory (as a kind of bittersweet memorial) by the widowed Mr Caudle, of a series of withering monologues delivered by his wife as the hapless Mr C climbed into bed in hope of sleep - only to be reminded of some indiscretion that would surely bring about in due course the fall of the house of Caudle. A naturally generous and convivial type, Mr C (pictured with a friend at his club, The Skylarks) is sometimes a little the worse for wear when he comes to bed, and knows what he must expect. On other occasions, though, it is some insignificant and barely noticed lapse that has set Mrs Caudle's dark imaginings to work, and he must be forcibly reminded of the inevitable consequences. Here, for example, he has thoughtlessly lent an umbrella. Oh dear...
'BAH! That's the third umbrella gone since Christmas.
"What were you to do?
"Why, let him go home in the rain, to be sure. I'm very certain there was nothing about him that could spoil. Take cold, indeed! He doesn't look like one of the sort to take cold. Besides, he'd have better taken cold than take our only umbrella. Do you hear the rain, Mr. Caudle? I say, do you hear the rain? And as I'm alive, if it isn't St. Swithin's day! Do you hear it against the windows? Nonsense; you don't impose upon me. You can't be asleep with such a shower as that! Do you hear it, I say? Oh, you do hear it! Well, that's a pretty flood, I think, to last for six weeks; and no stirring all the time out of the house. Pooh! don't think me a fool, Mr. Caudle. Don't insult me. He return the umbrella! Anybody would think you were born yesterday. As if anybody ever did return an umbrella! There—do you hear it! Worse and worse! Cats and dogs, and for six weeks, always six weeks. And no umbrella!
"I should like to know how the children are to go to school tomorrow? They sha'n't go through such weather, I'm determined. No: they shall stop at home and never learn anything—the blessed creatures!—sooner than go and get wet. And when they grow up, I wonder who they'll have to thank for knowing nothing—who, indeed, but their father? People who can't feel for their own children ought never to be fathers.
"But I know why you lent the umbrella. Oh, yes; I know very well. I was going out to tea at dear mother's to-morrow—you knew that; and you did it on purpose. Don't tell me; you hate me to go there, and take every mean advantage to hinder me. But don't you think it, Mr. Caudle. No, sir; if it comes down in buckets-full I'll go all the more. No: and I won't have a cab. Where do you think the money's to come from? You've got nice high notions at that club of yours. A cab, indeed! Cost me sixteenpence at least—sixteenpence! two-and-eightpence, for there's back again. Cabs, indeed! I should like to know who's to pay for 'em; I can't pay for 'em, and I'm sure you can't, if you go on as you do; throwing away your property, and beggaring your children—buying umbrellas!
"Do you hear the rain, Mr. Caudle? I say, do you hear it? But I don't care—I'll go to mother's to-morrow: I will; and what's more, I'll walk every step of the way,—and you know that will give me my death. Don't call me a foolish woman, it's you that's the foolish man. You know I can't wear clogs; and with no umbrella, the wet's sure to give me a cold—it always does. But what do you care for that? Nothing at all. I may be laid up for what you care, as I daresay I shall—and a pretty doctor's bill there'll be. I hope there will! It will teach you to lend your umbrellas again. I shouldn't wonder if I caught my death; yes: and that's what you lent the umbrella for. Of course!
"Nice clothes I shall get too, traipsing through weather like this. My gown and bonnet will be spoilt quite.
"Needn't I wear 'em then?
"Indeed, Mr. Caudle, I shall wear 'em. No, sir, I'm not going out a dowdy to please you or anybody else. Gracious knows! it isn't often that I step over the threshold; indeed, I might as well be a slave at once,—better, I should say. But when I do go out,—Mr. Caudle, I choose to go like a lady. Oh! that rain—if it isn't enough to break in the windows.
"Ugh! I do look forward with dread for to-morrow! How I am to go to mother's I'm sure I can't tell. But if I die I'll do it. No, sir; I won't borrow an umbrella. No; and you sha'n't buy one. Now, Mr. Caudle, only listen to this: if you bring home another umbrella, I'll throw it in the street. I'll have my own umbrella or none at all.
"Ha! and it was only last week I had a new nozzle put to that umbrella. I'm sure, if I'd have known as much as I do now, it might have gone without one for me. Paying for new nozzles, for other people to laugh at you. Oh, it's all very well for you—you can go to sleep. You've no thought of your poor patient wife, and your own dear children. You think of nothing but lending umbrellas!
"Men, indeed!—call themselves lords of the creation!—pretty lords, when they can't even take care of an umbrella!
"I know that walk to-morrow will be the death of me. But that's what you want—then you may go to your club and do as you like—and then, nicely my poor dear children will be used—but then, sir, then you'll be happy. Oh, don't tell me! I know you will. Else you'd never have lent the umbrella!
"You have to go on Thursday about that summons and, of course, you can't go. No, indeed, you don't go without the umbrella. You may lose the debt for what I care—it won't be so much as spoiling your clothes—better lose it: people deserve to lose debts who lend umbrellas!
"And I should like to know how I'm to go to mother's without the umbrella! Oh, don't tell me that I said I would go—that's nothing to do with it; nothing at all. She'll think I'm neglecting her, and the little money we were to have we sha'n't have at all—because we've no umbrella.
"The children, too! Dear things! They'll be sopping wet; for they sha'n't stop at home—they sha'n't lose their learning; it's all their father will leave 'em, I'm sure. But they shall go to school. Don't tell me I said they shouldn't: you are so aggravating, Caudle; you'd spoil the temper of an angel. They shall go to school; mark that. And if they get their deaths of cold, it's not my fault—I didn't lend the umbrella."
"At length," writes Caudle, "I fell asleep; and dreamt that the sky was turned into green calico, with whalebone ribs; that, in fact, the whole world turned round under a tremendous umbrella!"
Mrs Caudle's Curtain Lectures quite often turns up in bookshops in nice illustrated Victorian editions. It has also been reprinted in the excellent series of Prion Humour Classics, with an appreciative introduction by Peter Ackroyd, no less.
1. It’s good to be reminded of Mrs Caudle’s Curtain Lectures. The genre to which they belong, that of hen-pecked husband humour, has, probably for reasons of political correctness, fallen into disfavour. Other examples include the seaside postcards of Donald McGill – can you imagine anyone on the Left celebrating them as Orwell did? – and (more recently) Last of the Summer Wine. I didn’t realise that Mrs Caudle was still in print; she’s also available as a free download on Project Gutenberg. Incidentally, Anthony Burgess was a big fan.
2. I think deafness might have been the answer for poor Mr Caudle - before murder became absolutely essential. I feel depressed just reading her 'lecture'. Blimey! Poor old boy. | <urn:uuid:94e852c6-ff62-404f-b5a0-a8263962a6fe> | 2 | 2.078125 | 0.039547 | en | 0.982555 | http://nigeness.blogspot.com/2013/01/mr-jerrold-and-mr-caudle.html |
NIST logo
Bookmark and Share
Current and Future Research
Precision X-ray Wavelengths
The precise X-ray wavelength measurements have importance in testing QED in the presence of strong electric fields. The use of an EBIT to measure QED effects in highly charged ions under conditions free of Doppler shift corrections (the main systematic effect in most previous experiments) was pioneered by the LLNL EBIT group. Our activity in this area is guided by the work of the atomic theory group at NIST (Mohr, Kim) and the experimental work of Deslattes and Chantler. Additional theoretical guidance is provided by outside groups such as that of Indelicato, Safranova, Dubau, and Drake. Our measurement of the resonance lines of He-like vanadium [40] is one of the most accurate to date.
X-ray Polarization
In the absence of strong external electric or magnetic fields, atomic systems of different magnetic sublevels (but otherwise having the same principal and angular momentum quantum numbers) are degenerate in energy. Because of this degeneracy the measurement of the energies of atomic transitions is not enough to get a complete characterization of the quantum state of the system--the magnetic quantum numbers are also needed. The magnetic quantum numbers are related to the spatial orientation of the atom or ion. In the case of pronounced external symmetry, spatial effects can be important. For example excitation in cylindrically symmetric situations can lead to oriented or aligned systems where the magnetic substates with different angular momentum projections have the same energy but different populations. If such a system undergoes a transition which results in a photon or electron emission the emitted radiation will show anisotropic and polarized behavior.
The above situation is relevant in several natural and laboratory environments. Generally cylindrical symmetry applies for cases when atomic excitation takes place in the interaction by a directed beam of charged particles. An astrophysical example is the atomic excitation occurring in solar flares where a plasma made of charged particles (ions and electrons) moves along strongly directed magnetic field lines. Excitation due to directed flow of particles can be observed also in supernova shock waves. The same situation occurs in many of the laboratory experiments where electron or ion beams excite or ionize atoms or ions.
The EBIT capabilities for measuring electron impact ionization, excitation and recombination cross sections have already been demonstrated in several cases. Transitions in highly charged ions usually involve the emission of photons in the X-ray region. This fact offers an obvious choice of the use of X-ray analyzers (usually solid state detectors and crystal spectrometers) for measuring the above mentioned cross sections. In the EBIT device the ions interact with a narrow (about 0.06 mm diameter) beam of electrons. This very well collimated electron beam acts as a quantization axis making the cylindrical symmetry a natural case for electron-ion interactions inside an EBIT machine. Because of this fact care has to be taken in interpreting X-ray line intensities when they are used for obtaining electron-ion interaction cross sections. They can be strongly affected by anisotropic and polarized emission. The latter is most important in the case of crystal spectrometers where the energy dispersion is polarization selective. On the other hand the measurement of the polarization or the angular distribution of the X-ray emission can give information about the magnetic sublevels involved in the electron-ion collision [7]. This information remains hidden in a simple energy dispersive measurement because of the degeneracy of the magnetic sublevels.
Visible and UV Spectroscopy
Electron-Ion Collisions
Electron Impact Ionization
e- + A(q) yields e- + e- + A(q+1).
Radiative and Dielectronic Recombination
e- + A(q+) ? A(q-1)** yields A(q-1) + photon
Trapped Ion Dynamics
Ion Orbits
Charge state simulation
Atomic Lifetimes
Ion-Surface Interactions
NIST - National Institute of Standards and TechnologyNIST Physics Laboratory Home NIST EBIT Home | <urn:uuid:71582c33-ed62-4d5a-a4b2-2d7ac58b896d> | 3 | 2.828125 | 0.073837 | en | 0.914365 | http://nist.gov/pml/div684/grp01/spectroscopy-research.cfm |
Small retail properties support the needs of modern life, from eating to dry cleaning to movie rentals. Shoppers can cash a check, pick up a prescription, get their hair styled or buy flowers at such stores. Indeed, strip malls and freestanding outparcels have become embedded into the landscape as the archetypal backdrop of our daily routine. Although these stores are often ubiquitous and generic by design, they are not equal in the eyes of lenders.
Owners, caught up in the frenzy to lease their properties, often lose sight of how lenders will view their decisions. Keep in mind that lenders require detailed information about lease terms and rates, tenant financials, demographics and a variety of other factors before they agree to mortgage financing. Small retail real estate loans usually total $5 million or less, and borrowers can increase their chances if they know what issues are typically of concern to lenders.
For example, lenders may accept some irregularities in a deal if the owner is an accomplished borrower with a substantial retail portfolio. However, if the landlord has no track record, the lender will put more emphasis on location fundamentals and lease dynamics.
A key piece of information is a demographic study of the location. What are the traffic patterns? What is the income level in the surrounding community? Who is the competition? Does the location have any special features that could hurt or help business? For example, a shopping center might be the only stop between a business park and a residential area, meaning workers will stop by for bread, prescriptions, haircuts or car repairs.
When possible, lenders want to review historical figures on tenant sales per square foot. How does this store perform compared with national, state and regional averages? Lenders recognize that landlords cannot always obtain specific store information, but industry databases can sometimes provide average figures, and sales numbers for nearby shops can be useful. If a grocery store reports annual sales of $700 per square foot, a lender can extrapolate potential sales for a drugstore pad site.
Limited Attention Span
Sales figures are more important when the loan will finance a single-tenant location, because the cash flow is dependent on a sole retailer. But in either case, lenders want reassurance that the property will continue to generate revenue. Those in the business say the walk from the penthouse to the outhouse is shorter than ever, meaning today's hot retail concept is tomorrow's failure. Nothing proves out a location better than actual sales.
The tenant mix is another critical factor in the success of a property, and lenders prefer diversification. Take, for example, a typical shopping center with 30,000 square feet. A mix of six or seven tenants, each of which leases 4,000 square feet to 5,000 square feet, offers some protection if one tenant goes dark. The same center with one 25,000-square-foot tenant and a second tenant with 5,000 square feet is riskier.
Landlords also should understand the difference between their perception of a quality tenant mix, and the lender's perception. For instance, an owner may rent a suite of small retail spaces to individual hair stylists for $24 per square foot. The deal looks good on paper, but how realistic and sustainable are these leases? The lender will question the longevity of the concept, particularly if going rents for retail in that location are considerably lower. If a landlord signs a tenant for above-market value, the lender probably will not give extra credit for that. An owner may think he is sitting on a gold mine, but the lender may view it very differently because he has seen too many golden geese turned into Christmas dinners.
Another issue to consider is the lease expiration date. Lenders don't want all the leases to mature at the same time. If a landlord leases a shopping center to six tenants in the same year, he or she should stagger the dates to reduce the impact of lease maturation on cash flow.
Lease Terms are Important
A related issue applies to single-tenant leases. Lenders will be sensitive to leases that expire within the loan term. Ideally, large rollover events won't be hurt by an inability to refinance the loan at maturity due to the lack of remaining lease term. That said, lenders should be willing to work with experienced borrowers who have strong leasing teams and have proven their ability to manage rollover risk. Furthermore, this risk is often mitigated by the properties locational dynamics as proven by high actual or surrounding retail sales figures.
Landlords sometimes run into trouble when lenders ask for estoppels and subordination non-disturbance agreements (SNDAs), two types of documents that acknowledge the lease. A tenant signs an estoppel that says the lease is valid and states the terms, including rent, length of lease and square footage. The estoppel indicates the landlord is not in violation of the lease agreement.
An SNDA recognizes the priority of the mortgage to the lease and says that the lender will allow the tenant to operate under the terms of the lease. In both cases, some retailers will have their own forms for these documents and the lender may want a very specific form used. Leases should be crafted with strong language to ensure that the tenant is not able to hold the landlord hostage when seeking a mortgage. A landlord may want to craft specific compliance periods and financial penalties into the lease agreement.Unfortunately, with large national tenants this is easier said than done.
It's not a Gamble
The type of loan is another factor to consider when compiling a lease application. Permanent loans with maturity dates usually stretching to 10 years are often non-recourse loans, meaning the borrower, as an individual, is not responsible for the repayment of the loan. If the borrower defaults, the lender can only pursue the property, not the borrower's personal assets. While a number of sins can be forgiven with a recourse loan, the property dynamics take on greater weight with a non-recourse loan. At the end of the day, recourse typically translates into flexibility, and non-recourse into comparative rigidity. That said, roughly $90 billion of non-recourse securitizable term loans were entered into in 2004, so the requirements imposed by permanent, non-recourse lenders could not have been too restrictive. In the end, most prudent property investors share in the same structural concerns as lenders.
The degree of recourse may also temper a lender's perspective on the acceptability of secondary or mezzanine financing. This type of debt structure is often prohibited, because lenders don't want to take loans if they have additional encumbrances. They prefer to see 20 percent to 25 percent more cash flow than what is required to pay the interest on the debt, and other types of financing can eat into that cushion. In addition, lenders want to underwrite the controlling parties and do not want to be put into a position where control can change hands without their consent due to nonpayment on the junior liens.
Small doesn't mean simple. Lenders, like equity investors, have a multitude of concerns. To the extent that these concerns are addressed, the odds of your number coming up on the roulette wheel can be greatly enhanced.
National director of LaSalle Bank Real Estate Capital Markets | <urn:uuid:0b9ee8ba-ab53-4475-96d3-7dc74eed974a> | 2 | 1.898438 | 0.083283 | en | 0.942518 | http://nreionline.com/mag/increasing-odds-retail-loan-roulette |
Thursday, March 24, 2011
Class on Homosexual Marriage (2nd Class on the subject)
1. Father,
At 15:58 of the first video, you ask how gay marriage will not lead to polygamy. You say there's already some movement in that direction (of approving of polygamous marriages).
The short answer is that the 37th Congress of the United States passed the Morrill Anti-Bigamy Act, and it was signed into law by President Lincoln in 1862. It has gone unchallenged for nearly 150 years.
That is your answer to "How do we know gay marriage won't lead to marriages comprised of more than 2 people?" Christians, fearing the Mormon religion would overtake the population of other sects of Christianity through accelerated procreation, put this law into place as a response to the perceived threat. It is the only time the Federal Government has taken the initiative on marriage issues and not left them up to the states, and the only time a Law of the United States was passed specifically in response to a religious body (the Mormon Church).
....just sayin'.....
2. Final comment on the first video: While I agree that gay marriage proponents saying "well, marriage has always been between 2 people" is not an intellectually viable counter to the opponents' argument of "well, marriage has always been between a man and a woman," I would also point out that the opponents' argument is no more viable as a counter to ours.
We hear Maggie Gallagher of NOM, Tony Perkins of Family Research Council, and Focus on the Family and Pastor Rick Warren (among others) say that in the past 5000 years marriage has always been between one man and one woman. But that's not exactly true. In fact, TRUE religious marriages have outnumbered one man/one woman marriages, and they have been polygamous in nature, meaning that the TRUE 5,000-year history of heterosexual marriage has been between one man and at least one woman (often more than one, and sometimes FAR more than one).
So to fearmonger that gay marriage would lead to polygamous marriages really is one of the most intellectually dishonest arguments there are from gay marriage opponents, as polygamous marriages have been around for THOUSANDS of years PRIOR to gay marriage becoming an equality issue in the United States.
And don't EVEN get me started about the trend from Jesus' time up until the present, where divorce has been more of a traditional component of traditional marriage than "till death do just the two of us part" ever has (and daresay) ever will be.
As I have shared with many folks I have had this discussion with, it is ironic to me that during the same timeframe that the Church (both Catholic and Protestant) has been focused on gay marriages, the heterosexual divorce rate has skyrocketed to the point that over 1,000,000 children witness their parents' first divorce, and half of them witness their parents' second divorce, before even reaching the age of 18.
Again....just sayin'.....
3. On the second video, now. At the very beginning, you seem to feign ignorance at how many states currently permit same sex marriage, citing only Massachusetts, and claiming you only have 24 hours in a day to research the information you give your students. That is NO excuse for any educator to not be educated on the issue on which they themselves attempt to teach. You would have to be living under a rock to not know that 5 states and the District of Columbia approve same gender marriage, and to pretend you didn't know is, so far, the only disingenuous thing for which I believe a critique is TRULY in order.
Our students deserve better, Father.
4. On the last part of the second video, you allude to preachers being arrested for hate speech in Canada for "simply getting up in church and expressing the church's teaching on homosexuality", and fearmongered that it could happen in America as well, even as you later hemmed and hawed until finally admitting to the student questioning you that you would have to "brush up on your hate speech laws" to see if what you had just told them was indeed true.
And just as you complained about the "subtle" words like 'justice' and 'fairness" being used by the pro gay marriage "camp" as you call us, you yourself used the "subtle" words like "free speech", "religious freedom" etc. in order to strike fear in the general public and fellow pastors for something that 1). you aren't even sure is a valid point, and 2). that you should know full well is protected by freedom of speech and freedom of religion in America.
But that damage is already done in that student's mind....or another student's mind....regardless of how you hem and haw about not being familiar with hate speech laws, there is a very real potential for impressionable young minds to take you at your first word that pastors, preists and preachers could one day be prohibited from sharing with their congregations regarding their religious beliefs regarding homosexuality.
In this case, you serve as little more than an echo chamber, and have committed an act of "indoctrination"....that which our opponents shout from the rooftops that they fear the most if gay marriage becomes legal.
Hypocrisy is the word that comes to mind in that particular instance. And THAT is why Canada and other countries consider it hate speech. Because as those impressionable young minds in church are being taught that Leviticus prescribes the death penalty for any 2 men lying with each other as they would with women, there becomes a "justification" in their minds that bullying fellow students for being gay, yeah, even killing homosexually identified individuals, is just a-okay with God, regardless of how the church attempts to position itself as "not the far right wing".
Tricky, tricky.
5. Third video: At the very beginning you mention Catholic Charities being "forced to stop providing adoption services" because it would not provide adoptive children to same gender couples, saying "it's already being forced upon a religious institution...."
I paused the vid here, so I may have to revise this statement, but....
Nobody ever forced Catholic Charities to be in business in the first place. If the Church were more successful in preventing unwanted pregnancies in the first place (perhaps through a papal dispensation permitting Catholics to employ contraception), then there would be less of a need for adoption in the first place. Secondly, Catholic Charities can continue to provide all the adoption services they want to, as well as to continue to refuse to place adoptive children with same gender couples... they simply can no longer receive the MILLIONS of DOLLARS in FEDERAL, STATE and LOCAL TAXPAYER funding in order to do so.
This is one of those REALLY deceptive misperceptions that I hope comes to an end sooner, rather than later....
6. The first argument you (mis)represent is that gay marriage proponents are petitioning the government to "permit friendships?" And then you go on to say that "friendship is great, but don't call it marriage." We AREN'T, and you strike me as intelligent enough to understand the difference. You may personally consider a single gender loving, committed, monogamous and lifelong relationship as nothing more than a "friendship," but I assure you we do not. We call our marital relationships "marriage" because that's what they are. We understand the difference between friendship and marriage. We have NEVER asked the state to recognize "friendships", and you once again demonstrate a willing obfuscation of that which you seem to deliberately misrepresent. I call foul on that particular point.
Father, I would have to search high and low to find that coming from our "camp." However, I am willing to reconsider my statement should you be willing to cite a source for that assertion.
7. At 6:23 of the third video, the question comes up: why not have civil unions for gays instead of marriage? There are a few reasons. 1. The Morrill Anti-Bigamy law would not prevent more than 2 partners in a civil union, because a civil union would not be "marriage" by the state's definition. If the church is truly concerned about polygamous relationships, do we really want to instill into society a way to provide heterosexual polygamists with a way to sidestep the bigamy laws?
You also say that it would be unfair to deny a "civilly unioned" couple the same benefits if they were not having sexual relations. But again, you have no tool at the state level to prevent unwanted polygamous relationships among heterosexual couples.
Final point is that a civil union is not marriage. Only marriage is marriage. Marriage has consisted of 1 man and 700 wives in the case of King Solomon himself, son of King David of Israel, from whose line Jesus Christ Himself eventually descended, and the world didn't come to an end.
Civil unions are a half-baked version of marriage under which marriage laws would not be applicable. Every state would have different terms for what constituted a civil union, and the federal and state governments would have no legal standing for granting or denying income tax filing status. The tax codes ask "married" or "single". There is no "civilly unioned" on a tax form which would permit joint filings.
Finally, and quite frankly, those of us who are fighting for marriage equality are not fighting to pass on a half baked version of marriage in the form of civil unions or domestic partnerships to future generations of gay, lesbian, bisexual and/or transgender people.
Moreover, the church, too, would then stand to become further divided over the issue as each denomination of Christianity fought over whether or not they would think God was okay with civil unions for their lgbt parishioners. Father, this is a can of worms that can only become wormier, and if civil unions become the norm, it will only drive the courts to grant marriage equality more quickly in order to prevent enshrining even more discrimination into our laws.
Civil unions are an important (albeit tentative) first step toward marriage, but they are not marriage, plain and simple.
8. Next, you ask what compelling governmental interest the state would have in granting anyone marriage, and the response you agree with is to continue the society [through procreation].
But society is not going to grind to a halt if gays have their relationships recognized as the marriages we ALREADY consider them to be. You say in the first part of the video that the Catholic Church is the largest adoption agency in the world. Why then would the state be concerned with the population dying out when in fact we already have MILLIONS of unwanted children?
To say that society would die out if gay marriages become recognized by the state is a non sequitur at best, and another attempt at fear mongering at worst. IMHO.
9. In closing, let me say that as the one student mentioned, "at least you're not as "offensive" as most gay marriage opponents can and have been." I found the class thoroughly enlightening and wish something like this had been taught in my school as I struggled with coming to terms with my homosexual identity. However, if this is going to continue in our schools, I do hope the proponents of gay marriage would be included in the discussion as well in order to give students both sides of the issue.
You stated early in the first series of videos that "society already gives the other side of the argument", but as pointed out in some of my comments, that did not prevent you from mischaracterizing or misrepresenting many of the pro gay marriage equality proponents' points.
All in all, I would rate this video series as a B and would deduct points as already explained in previous comments.
I understand this series is actually 4 hours long, and I hope to view the rest of it at some point. Suffice for now to say that I am grateful for the opportunity to have a look at the inside of what I would continue to consider the same "indoctrination" of students that our opponents most vocally denounce ocurring, unless it seems, it is an indoctrination of thier own points of view. And that is of course, a form of bigotry, prejudice, or hypocrisy...whichever adjective you prefer to use.
Bottom line summary: Thank you for your time, and for posting these videos, though I do wonder what your purpose in posting them was. Is it to further indoctrinate other students? Is it to provide a model by which this can be done in schools all over the country? Is there some self-serving interest of your own, such as being promoted to the speaking circuit in order to collect large fees for your lectures? I'm just really curious on that point.
In closing, let me just say I hope we develop a dialogue that takes a closer look at some of the counterpoints that I find in Scripture as well as society, either on Twitter or via my email address:
PEACE and BLESSINGS be yours. Even as we appear to have sharp disagreements, I think you have demonstrated the ability to be respectful, and that's all most gays are really asking for from our opponents: respect... and a bit of that dignity the Catholic Church says we are due. :)
Hope to talk to you more soon.
Brian Anthony Bowen
Author, The Bed Keeper: A Biblical Case For Gay Marriage.
PS: If you decide to delete these comments from here, that's cool too. I've saved them on my computer, and can repost under another name if that's what it comes to. Otherwise, I expect you to welcome comments from all viewpoints, based on your own claims' ability to withstand scrutiny, if they are indeed accurate, truthful, and Scriptural.
10. I have watched the videos and find that you are not telling the truth in many instances. As a teacher you should give the students facts. If you don't know the facts, you should not teach the course.
11. Watching your anti-gay class, I feel so thankful that I was raised Jewish!
12. Such hatred from a man of the cloth.
What would Jesus do?
Marriage equality takes place all over the world, and in 5 states and DC here in the US.
Can you--or anyone--tell me how straight marriage has been ruined thus far by marriage equality?
13. Former Catholic, former Hoosier here - John, you make me proud to be a gay, atheist New Yorker. I tried to kill myself when I was 20 and my head was filled with the kind of garbage you pour on your students. You are doing so much harm that you're not even aware of, it makes my heart sick for your students who may be struggling with their identity.
14. Just because something has always been that way doesn't mean it should stay that way. Slavery was the norm for thousands of years...are you saying slavery should still be legal?
15. Oh, oh, it's "Eunuchs for the Kingdom of Heaven" preaching on sexual morality to the young -- again. Personally, I feel that if you don't play-uh the game, you don't make-uh the rules. But hey, Pope Benedict XVI has been in a cozy and committed relationship with Monsignor Georg Ganswein for a few decades, so maybe he's qualified to speak. And if I had a penny for every single gay and closeted priest I knew, I could quit my job and retire tomorrow.
16. I feel that Father John does not mean any hate towards any gays. He is simply saying what the Church believes. I personally appreciate his teachings.
I see both sides: Live and let live. But if you don't believe in the Catholic teachings, then don't be Catholic.
17. Stick to what Catholic priests know best, molesting and abusing children.
People should never take the word of a church that has committed the horrible crimes committed by so many priests.
What you think about the world is irrelevant.
18. I look forward to a class highlighting the widespread, decades-long rape of children by Catholic clergy, and the Church's orchestrated cover-up.
19. catholic homosexual (March 28, 2011 at 5:20 PM)
God bless you, Father. I am a young Catholic who has this problem. There is so much self-deception in the homosexual world. As the gay groups extend their political influence ever further, it is becoming almost impossible to hear the truth taught on these subjects. Homosexuals, Catholic and non-Catholic, need to hear the truth preached in love. Some will rage against you, but many of us - perhaps often silent for fear of the consequences of raising our voices - will be grateful to a good priest, who is living out his vocation to preach the truth that saves.
20. All you gays that posted here are all angry cowards who try to use abuse in the church as a reason to justify your own selfish desires. Why do you NEED marriage? Why can't you have civil unions with all the same legal benefits and be happy? Why are those who believe marriage is between a man and a woman considered "haters" and so forth? Face it, people don't HAVE to agree with you, and are entitled to their opinion just as much as you are entitled to scream and shout about yours. And if you want to talk statistics, I'm sure you can find many more doctors and non-Catholic pastors, counselors, or even homosexual Boy Scout troop leaders who abused children. How would it feel if every gay guy who worked in Scouts or other public service organizations was called a child molester? Wouldn't be a very fair generalization now, would it? Again, if you aren't Catholic, then why read these posts and come here to spread hate and vile insults? Somehow it always seems to be a one-way street. Just remember to take the wooden beam out of your own eye before you try to remove the splinter in someone else's.
21. Well, for someone who doesn't have time to watch the videos, let me sum it up. If you commit sodomy and do not repent, you will burn in hell.
22. The "rape of children" by the way, was homosexual molestation of male teenagers. Get your facts straight.
23. I am a gay rights activist, and I have never met a same-sex marriage proponent who is also a supporter of polygamy. Those that are in favor of both are few and far between. So the idea that same-sex marriage begs the question "is polygamy next?" doesn't really compute.
Polygamy and same-sex marriage are 2 completely different issues, because we say that GENDER should not be an issue in marriage, not the # of people! If there is ever a polygamist movement, I and many other gay rights activists will stand firmly on "your" side, Father.
Sister's eyebrows are reaal clean, that's all I'm sayin'.
25. The recent sexual abuse scandal in the Catholic Church is not a pedophilia issue - most of the abused were post-pubescent. It was a scandal of homosexuals raping teenagers. Get the homosexuals out of the seminaries and parishes and the sex abuse scandal ends. It is quite simple.
About the sexual abuse scandals - it doesn't matter if it was homosexual or heterosexual. Priests are to remain celibate and are to be protectors of human life, at all ages. The priests (and, by the way, the pastors, rabbis, coaches, scout leaders, teachers, parents, and everyone else who has participated in these acts) made poor choices and should receive consequences as well as help. However, it has nothing at all to do with sexual identity. Before you bash the vow of celibacy, perhaps you should look into the reasons for it and the good that comes out of it. Just because it's not your cup of tea doesn't mean it's wrong or stupid. Saying that priests can't discuss sexuality because they don't have sex is like saying teachers can't teach history because they weren't there when it happened. Or that teachers can't teach about other cultures unless it's their own.
I will also say, however, that it was an interesting point to learn that the issue the government had with Catholic Charities resulted in loss of federal funding and not revoking the right to aid adoptions was enlightening. That should have been made clear in the video as the government has a right to deem how its money is spent, keeping Church and State separate.
What I would absolutely LOVE to see is a live debate with Fr. John and a pro-homosexual marriage representative. I think we could all learn a whole lot, most importantly how to treat people with differing opinions with respect. I believe that, even though Fr. John may have had some facts incorrect or misinformed some students, he is nothing less than respectful in his postings, comments, and teachings. And that is a whole lot more than I can say for many of the anti-Catholic comments above.
27. Thank You for explaining the Church's teachings and position on topics such as this Father! Especially in a world where our children are bombarded from all directions on a daily basis with messages that conflict with those of the Church. Some people don't seem to understand the Church's teachings aren't open to debate! We appreciate all that you do and you are an exemplary role model for the students. I can go on and on about the way people are responding to you but the bottom line is . . . this is what we believe as a Church, the students are in your class to learn these lessons because we choose to have them there! Thanks Again!!!
28. Wow. Such hate from the self-appointed tolerance police in the comments here. The fact is that Father explained both arguments for and against same-sex "marriage" in a theology classroom at a Catholic high school. What did you expect him to say? He did so in a manner that was not at all "hateful" (show me where that is), "bigoted", or "indoctrinating" (a fancy word for "you said something I didn't like").
The critiques here, especially from BrotherBrianBowen, are uncalled for. Father's analogy to alcoholism and "prostitution" (he didn't use that word) were not meant to equate them with same-sex attraction, but to point out that clearly not every inclination, even if one is genetically predisposed should be indulged. If you really think he was equating them all, you missed the point of the analogy.
It is also unfair to criticize Father for not knowing each and every statistic on same-sex "marriage" by state off the top of his head. I'm sure students frequently ask him questions on particular issues where the details escape him in the moment. It is unreasonable to expect him to know everything about everything off the top of his head. But this criticism is really irrelevant, since the primary purpose of the class was not legal history but to give the theological and philosophical reasoning the Church offers for its opposition to same-sex "marriage" and homosexual activity (or any sexual activity outside marriage, for that matter).
Kudos to Father for explaining a difficult topic and one for which he was sure to get labeled if he didn't tell certain people what they wanted to hear.
Same-sex "marriage" supporters...many of us disagree with you, and it is not bigotry to say so. And if it is, then you are guilty of the same.
Ryan, a fellow Catholic high school theology teacher in the Bay Area.
29. @ Ryan,
You are absolutely right - teachers can't be expected to have all the details about everything, and it doesn't really matter anyway because this was a lesson about the Catholic Church's teaching - to students who pay to attend a school that will teach it! And I really wish that people who had watched this class had watched the first class as well. I also appreciate your comment, and Fr. John's explanation, that we, as humans that differ from animals, should learn to control our desires!
And for the haters, this comes from someone who does not agree that polygamy will follow homosexual marriage.
To the comment about friendships: I believe that Fr. John was saying that if anyone is allowed to marry anyone, what would stop any two people from marrying just for the tax or citizenship benefits? Would it not be even more difficult for the State to discredit fraudulent marriages? Do not misinterpret this comment: I do not intend to imply that all heterosexual marriages are honest and faithful, nor do I intend to imply that no homosexual relationships are honest and faithful. Merely playing devil's advocate here. And I believe that Fr. John, in other situations (be reminded that the topic of this class was homosexual marriages and the Church and he could therefore not cover all aspects of marriage in itself) has also discussed the sanctity of marriage and how society has severely diminished that - see posts/classes regarding contraception and its affect on society and marriage.
I have yet to see disrespect from the "marriage for one man and one woman only" camp on this blog, only from the same-sex marriage camp. Just because someone, or an institution, disagrees with you, and eloquently explains its position, does not mean that there is any disrespect to the individuals on the other side.
30. Kid above me.
He only taught one side. As the opposite side, learn at least that much. Thanks.
31. This comment has been removed by the author.
32. Hey Ryan,
Not sure where you got anything from me about the Father's analogy to alcpholism or prostitution.
I did however, share plenty on divorce, which you fail to even mention...and if you're also a theology teacher, it would seem to me that focusing on heterosexual marriages more, and focusing on gay marriages less, may just indeed be the answer the church needs in order to rein in the divorce rate.
Again, divorce harms the children involved. Gay marriage does not. Where are your priorities? Wait...your comments answered that already.
A better question: Why isn't preventing DIVORCE and unwanted pregnancies your priority...especially when that affect 10 times the amount of the population and the church?
33. Anonymous 3/29 10:11 PM- No, he did mention (and argue against) some of the arguments for same-sex marriage. This is a fact. You may not agree with him or with me, but he did give some of the arguments for. He even showed a CNN clip that showed a same-sex proponent speaking.
BrotherBrianBowen-You're right, you're not the one who critiqued that analogy. My mentioning it right after calling your many criticisms uncalled for did make it seem that it was you who said it. That wasn't my intent, I was speaking generally about some of the criticisms.
Concerning the rest of your comment at 3/30 8:30 PM...Red Herrings. Your post was condescending. What do you think my "priorities" are? And how do you know that preventing divorce and "unwanted" pregnancies are not a priority for me? These issues are all important. One at a time...
I didn't mention divorce. Why? Because I just didn't. You posted a LOT. I just didn't respond to everything, for no other reason than that I just didn't. I agree with you that the state of marriage is in trouble and that yes, children suffer tremendously because of it. And as a theology teacher, I talk to my students about a number of assume I spend all my time on one. Incorrect. But we ARE talking about same-sex marriage I chimed in. Wow your posts are condescending.
34. Father, it is dangerous to tell the truth, but this world needs truth tellers. I'm with you 100%. Your students might find this essay helpful:
Best wishes to you!
35. Father Hollowell,
Thank you for speaking the truth courageously in the face of this hatred and calumny. We're tired of the pro-homosexual lobby pushing their ideology on the rest of us, and screaming "hate" if we peacefully disagree with their choices. It seems they don't really believe in tolerance or free speech; it's their way or the highway.
Thank you again.
36. Keep on Keeping on Fr. Hollowell! These students need the truth. God Bless you.
37. God bless you and your work, Father. I find it ironic that those who wish you had included pro-homosexual viewpoints have not seemed to notice that you have published their comments on your blog, despite having no need to do so. Our teens are bombarded by the entertainment industry, the Internet, and indeed even their schools to embrace a form of tolerance that would eschew love, for true love is not afraid to name that which is evil as evil in an effort to redeem it. Weary not in well doing. I pray your Rome trip was restorative to you in body, mind, and spirit, and that you return to the battlefield, i.e. the classroom, ready once again to take up the arms our Lord has given.
38. Br. Brian condoms aren't the answer, even w/ wide spread use of birth control, there are still so many "unwanted" preg. Resulting in 50 million abortions we have seen. I have to wonder also what self seeking interest you had by posting several lengthy comments & then putting a book you wrote along w/ your address for all to see....just saying....ft. H. Keep up speaking the truth in charity & giving our kids the truth, no matter how hard it is to hear ....truth isn't always easy to swallow & digest. Speaking the truth some will scream HATE, but these same screamers have no problems screaming obscenities back to you they want tolerance and understanding but sure dont give it in return.
39. If you explore Fr. Hollowell's blog, you will notice that he does discuss divorce in many instances. Specifically, in relation to birth control. His explanation of the Church's stance, for which he has statistical evidence, connects birth control with higher rates of divorce AND unwanted pregnancies. For you to assume it has the opposite effect shows that you have not done the research.
40. Father John is teaching a THEOLOGY CLASS. People act as if he went into a math class and began a homily. This is a theology class at a Catholic school, he is charged with teaching these children what the Church says, not popular opinion, not both sides of the debate. If you don't like what he is teaching, don't watch the video-I don't watch videos of things I don't agree with, so why do you?
41. Father,
Again, forgive the late response, as I am just now reading your blog.
A bit of background to support what I am saying: My ex-husband was an EMT at where I work. He swept me off my feet and asked me to marry him very early on. Anyway, I rented out my house to my parents and, after we were married, moved in with him in his house (and took out a home loan on mine to pay off his credit cards). He dumped me the week after getting his Green Card. He got it through NACARA, but needed the wife and the house to look good because it turns out he had a criminal record...He is gay and was having anonymous gay sex while we were married. He and I had a long discussion (argument) after the fact about gay marriage and would he have used someone if he were able to marry a man.
His opinion and that of his friends?
That they don't believe in marriage between two men, only legal unions, because marriage is supposed to be monogamous and they all agreed that the men in long term relationships that they knew all had "open relationships." They said that they knew of literally zero relationships between two men that had lasted for over two years that didn't allow for sexual activity with other men "As long as there were no emotions."
This says quite a bit to me. Oh, and I still rent the house out to my parents-I figured the fast track to hell was kicking one's parents out onto the street, so my husband (my husband now) and I are living in an apartment until they are ready to buy my house...
Before the hate responses begin if anyone else reads this, I am stating what they said, I am not trashing anyone if they are in a monogamous relationship or stating that it's not possible. | <urn:uuid:781a6b2a-8dba-4824-b8ba-4f975ef295c2> | 2 | 2.09375 | 0.077249 | en | 0.978109 | http://on-this-rock.blogspot.com/2011/03/class-on-homosexual-marriage-2nd-class.html |
Save Search
Download Current HansardDownload Current Hansard View Or Save XMLView/Save XML
Previous Fragment Next Fragment
Monday, 19 March 2012
Page: 3432
Mr LAURIE FERGUSON (Werriwa) (18:39): I congratulate the member for Fremantle on moving this motion not only to outline the specifics of the Second World War incarceration but because it represents a broader international problem. After the Second World War, Stalin deported hundreds of thousands of Chechens, Volga Germans and Crimean Tatars. In Czechoslovakia the government expelled hundreds of thousands of Sudeten Germans. This is symptomatic of societies that distrust their citizens in times of conflict. I do not want to cover extensively the Second World War situation—other members have—but there are some very worrying historical patterns in this country. In the First World War, because we had doubts that King Constantine would stay onside with the allies, every Greek family in this country was investigated by the precursor of ASIO, who went to their neighbours and asked them about their loyalty. In the Riverina, particularly, in both world wars the large number of German settlers were very heavily persecuted and the names of towns were changed. I had the opportunity in my political career to discuss this with Tim Fischer, a previous National Party member from the Riverina, whose own family endured these kinds of circumstances. It was not here of course. In the United States, although they only incarcerated one per cent of Hawaii's huge Japanese population, 100,000 Japanese in the United States were incarcerated. It was only Reagan's apology in 1998 that put some end to that.
Similar events occurred in this country. Sir Henry Bolte was probably one of the toughest politicians this country has ever produced and was famous for hanging Ronald Ryan. If you go to this country's National Archives and listen to his oral history, he said that throughout his political career he always dreaded that the Australian people would find that he was of German extraction. In the Riverina, in Mildura, and in the Albury area we incarcerated two Lutheran ministers because they might have been pro Nazi. One of them, unfortunately, was a Lutheran convert from Judaism. A person active in Sydney's Jewish community, Josie Lacey, tells the story that, when she arrived here as a Jewish refugee, she and her family were so distrusted that they were not allowed to live on the coastline near Bondi or Vaucluse. They were moved out to Wentworthville because they might otherwise communicate with German submarines.
My local Guilford chemist is of Italian extraction and told the story that throughout the Second World War his father was forced to work for the Catholic Church from Monday to Friday, basically for nothing, and only come home on weekends. We have a situation in this country where, in times of conflict, minorities are doubted and there is no respect for their citizenship of this country. Amongst the 7,000 incarcerated during the Second World War, 1,500 were nationals and were actually British citizens.
There are other things I had not heard about. One thing I came across when reading about this resolution was an incident that I was previously unaware of. At Cape Bedford in Northern Queensland, because the local Lutheran pastor was a German, they moved 250 Aboriginal Australians to Cooktown and Cairns because we could not trust them because they had a Lutheran pastor. Of those people, 28 died in the first month because of the change of temperature and climate and, eventually by March 1943, 60 of them had perished. We pride ourselves on multiculturalism and, of course, we are a world leader. But these are things we should be very careful of. As I say, in times of frantic nationalism and patriotism, these kinds of mentalities and situations arise.
Another incident in this country happened in Broken Hill, where rioters burned down the German club and a large number of other properties connected with Germans. It is a situation that is very damning. The major writer in this area is Klaus Newmann author of In the Interest of National Securityand another article on Wolf Klaphake entitled, A Doubtful Character. Another incident is that a person, who was an inventor, was victimised by the German Nazis and managed to get to this country. However, he fled and got to this country to be liberated and then we incarcerated him because he might have been a Nazi sympathiser.
The dimensions of this are that people were ostracised by their neighbours, by the people that they went to school with and by their friends, they were marginalised in society and not trusted, their lives were basically torn asunder and careers that they might have aspired to were destroyed. All of these are things that are very integral to the resolution that the member for Fremantle has moved. I recommend it very strongly to the House and congratulate her endeavour in an important issue. | <urn:uuid:82113004-cede-412e-8974-139cdd639776> | 2 | 1.507813 | 0.023679 | en | 0.9848 | http://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;db=CHAMBER;id=chamber%2Fhansardr%2F451460c0-4232-4947-a01e-30cf827a8e30%2F0352;query=Id%3A%22chamber%2Fhansardr%2F451460c0-4232-4947-a01e-30cf827a8e30%2F0111%22 |
Take the 2-minute tour ×
I'm considering the case in which I have an invention and the subject matter is a (computer) web server. So: the first exemplary embodiment would be based on a “web server”, but since I don't want to introduce any limitation I'd like to claim it just as “server”. I draft a broadest possible claim and a state:
1. A server apparatus and system comprising …..
If I want to claim a “server” then I have to specify the term “server 100” in the “DETAILED DESCRIPTION” section and in the drawings. But actually the first exemplary embodiment would be a “web server”.
So, in the detailed description of the embodiment, should I:
• include a “web server 101” in the block diagram of the “server 100” like in FIG.1? I've seen this way done in some patent application but I think that the it's not so logically true that something that is a “web server” is included in a “server”. It would be like saying that a “red car” it's included in a “car”.
FIG. 1
• Having a first embodiment with a "server" like in FIG. 1 and the having a second embodiment with a "web server" like FIG. 2? Doing this way the first embodiment wouldn't be fully specified then because it's actually possible to do a detailed description just for the second embodiment, for which is possible to enter in other limitating details like HTTP protocol and so on. So isn't there the possibility that the first enbodiment would be considered as not fully detailed and therefor not valid as a basis for claim terms?
FIG.1 and FIG. 2
Is there any other suggested way to do what described above?
share|improve this question
If server is hardware then it makes sense that the web server is in a server. You may be overthinking this from a point of view of not enough specialized knowledge to keep your head from spinning. – George White Aug 29 '14 at 0:35
Thanks, you're right but maybe the server case is a particular case. I was wandering what is the right procedure to "zoom" a claim term in the detailed description of an embodiment, let's take another example: server "response" --> web server "http response". – marcoe Aug 29 '14 at 7:09
In the second part of your answer are you hinting me to not to think it in a too specialized way or that I'm not considering it from an enough specialized point of view? (Sorry, I'm not sure about English interpretation of your sentence). – marcoe Aug 29 '14 at 7:11
Sorry for the hint. I think software patenting is a very complex and rapidly changing field and to get a reasonable patent application filed you will either need help from a patent practitioner or to learn a lot about patents. – George White Aug 29 '14 at 20:21
Claim 1 says a method for XYZ comprising: – George White Aug 29 '14 at 20:21
1 Answer 1
In this framework example XYZ is a result. Results are not patentable but methods of achieving results can be patentable. Some claims are independent claims which means they stand alone and do not represent a narrowing of some other claim. Dependent claims refer back to a preceding claim that is then narrowed by adding more steps or more details to the existing steps (assuming a method claim). Dependent claims can depend from a a chain of dependent claims back up to a top-level indecent claims. Each dependent claim takes on all the limitations (you call features of the claims in the path from it back to the independent claim. If 3 depends on 2 which depends on one, then 3 has requires all the limitations in 1, 2, and 3 to be infringed. In another case 2, 3, 4, & 5 might each depend directly from 1. in that case there is no interaction between 2 and 3 or 3 and 4. Arbitrary trees can be constructed. However, nothing in the dependent claims can remove a limitation in something above it and must do some narrowing. If claim 1 says a vehicle system having an airplane and a X horsepower engine, claim 2 can not say "The vehicle of claim 1 where the vehicle is a boat rather than an airplane." To change remove the airplane requirement would take a new top level independent claim. This is one area of patent law that is logical.
Claim 1 A method for XYZ comprising:
(a) receiving a request;
(b) transmitting a response.
Claim 2 The method for XYZ where the request is an HTTP request.
Claim 3 The method of claim 1 where the request is an FTP request.
Claim 4. The method of claim 3 further comprising:
retrieving the requested data from a data base;
and where the transmitting includes the retrieved data.
share|improve this answer
I see software patenting is a complex field but nevertheless it should have its logic. So my question was not directly regarding how to claim but how to define claim terms. – marcoe Aug 30 '14 at 12:47
First I suppose that a term used in claim has to be defined in one embodiment in the detailed description section. If this it's true, than two opposite needs arise: (1) to use the widest terms in the (first) broadest claim and (2) to define in an embodiment every term that is used in claims. But the broadest claim should be somehow “embodiments-free” to not to be limiting, given that every embodiment is a further specification of it: the broadest claim should be a set containing all the possible embodiments and not an embodiment itself. – marcoe Aug 30 '14 at 12:48
So in your answer you have a broadest claim on XYZ and some subsets of it (further specifications). But how is XYZ founded? – marcoe Aug 30 '14 at 12:50
I was considering in my question two ways: (A) – having in the detailed specification a first embodiment of a generic “server” that also provides (includes) a “web server” and then actually basing the description on the features of the “web server”; (B) having a first embodiment for a generic “server” that would have very vague description and then having also a second embodiment for a “web server” with a more detailed description. Both (A) and (B) the ways are finalized to give a foundation to the broadest term “server” used in the first and broadest claim. Which is the better one (if any)? – marcoe Aug 30 '14 at 12:50
And again: if I have a broadest claim (1) that define a “server comprising: (a) receiving a request and (b) sending a response” then I would introduce a new claim (2) in order to add a new feature (c) to the claimed server and not to extend the yet claimed feature (a) or (b) to different embodiments. Indeed that the claim (1) should be yet broad enough to include all embodiments. Just to put in other words: let assume that adding a new feature is sort of vertically extending a claim and instead adding new embodiments is sort of extending horizontally. – marcoe Aug 30 '14 at 13:08
Your Answer
| <urn:uuid:e887052f-5d35-446a-a6ee-b71a0c264140> | 2 | 1.804688 | 0.634181 | en | 0.925148 | http://patents.stackexchange.com/questions/10142/defining-broadest-term-for-claiming?answertab=oldest |
Free phonics games
Online resources for teachers and parents to help children,young people and adults learn to read.
Phonics Flash cards android app all phases
Phonics flashcards phase 3,4 and 5 android app
apk File [110.1 KB]
Don't forget to download Adobe Air for android.
Wordsearch Advance words
apk File [8.4 MB]
Phase 3,4 and 5 Wordsearch game and android app Click here
Phase 5 drag and drop game
free phonics games,Phase 5, drag and drop, game, phonics, activities,free, interactive
Online Flashcards
free phonics games and flashcards phase 3 blends online
Click here for more flashcards
Free phonics games phase 3 animal theme
A dreadful pronunciation poem
I take it you already know
Of tough and bough and cough and dough?
Others may stumble, but not you,
On hiccough, thorough, lough and through?
Well done! And now you wish, perhaps,
To learn of less familiar traps?
Beware of heard, a dreadful word
That looks like beard and sounds like bird,
And dead: it's said like bed, not bead -
For goodness sake don't call it deed!
Watch out for meat and great and threat
(They rhyme with suite and straight and debt).
A moth is not a moth in mother,
Nor both in bother, broth in brother,
And here is not a match for there
Nor dear and fear for bear and pear,
And then there's dose and rose and lose -
Just look them up - and goose and choose,
And cork and work and card and ward,
And font and front and word and sword,
And do and go and thwart and cart -
Come, come, I've hardly made a start!
A dreadful language? Man alive!
I'd mastered it when I was five!
What are phonics?
The reading system of English words from an Anglo-Norse heritage is fairly systemised and follows distinct rules. Most of these words are monosyllabic (one syllable) e.g. dark, come, fish, great. More complex words of a Latin origin are harder for the reader to decipher, often containing neutral vowel sounds such as effect, affect, accountant etc.
Phonics teaching is divided into three main levels –Phase 3, Phase 4 and Phase 5. In each of these Phases the pupil is introduced to groups of letters which represent a sound in English. The two consonants put together for example in Bl is called a blend. Other blends include ch, th, gr , gl. When two vowels are combined, for example ea they are called vowel digraphs.
Phase 3 includes all single letters of the alphabet commonly known as CVC ( consonant vowel consonant) for example cat and CVCC (consonant vowel consonant consonant) for example back or grin. Phase 3 is approximately what an average child would cover and be able to read at reception level. Phase 3 also includes some vowel digraphs (two vowels together) ee, oo both as in fool and good and ai. In addition pupils are introduced to a list of tricky words and although they are relatively simple they do not necessarily follow the spelling rules pupils have learnt previously. Pupils are first encouraged to be able to read these words without being able to spell them.
Phase 4 introduces the following vowel digraphs lake,like, stone, cube,or, air, igh, old, wind, wild,ay, ea, bow, saw, blew,her, car, fir,oa,oi, oy, ou ,er at the end of a word e.g.hammer and double letters e.g. address and bible. As for phase 3 there is a list of irregular words to recognise and learn. This is the level of achievement for year 1.
By now at phase 5 pupils should be quite competent at deciphering phonics and should show that they are able to build on what they have learnt previously and be able to read new combinations of letters. This level of recognition is what would be expected of a year 2 pupil again with the emphasis on reading and not on spelling the words. The list of irregular words is greater and includes longer words with more syllables. Unexpected spellings such as treasure, television, bacon, tuba who, when, wash are introduced and the remaining vowel digraphs of blue, tie and hoe appear.
Letters and Sounds- Information on teaching from the DCFS | <urn:uuid:69f922b6-b291-47a9-8aaa-21a2022f86cd> | 3 | 3.15625 | 0.070341 | en | 0.926176 | http://phonics-flashcards.jimdo.com/ |
Majority won't have access to antivirals in pandemic but generic drugs could help prevent deaths
Jun 12, 2009
That's the conclusion reached by an extensive review and analysis by immunisation expert Dr David Fedson, published online by and Other Respiratory Viruses within hours of the World Health Organization declaring a pandemic.
"For example we still don't understand why so many young adults died in the 1918 pandemic, while the death rate for children was much lower. I believe this is because researchers have focused on studying the actual virus rather than how these particular hosts - the children and young people - responded to the virus.
"Most of the world's population lack realistic alternatives for confronting the next pandemic and urgent research is vital. Otherwise people everywhere might be faced with an unprecedented public health crisis."
"Research suggests that giving patients anti-inflammatory and immunomodulatory agents such as statins, fibrates and glitazones could help to regulate the cell signalling pathways in patients who have suffered acute lung injury, a common problem with influenza" he says. "They can also help to reverse the cellular dysfunction and cell damage that accompanies multi-organ failure.
"Statins are commonly used to lower cholesterol and prevent heart disease - but have also been shown to be effective in reducing hospitalisations and deaths from pneumonia. Fibrates modify fatty acid metabolism and glitazones reduce blood glucose levels in type 2 diabetes. All of these drugs modify the cell signalling pathways involved in acute lung injury and multi-organ failure. Moreover, they are affordable generic drugs that are widely available even in developing countries."
"In all likelihood, people in these countries won't be able to obtain supplies of pandemic vaccines or they will get them too late" he says.
"At a scientific meeting in 2008 we heard that all of the people who developed bird flu in Indonesia, and did not receive antiviral treatment, died. This observation is terrifying. If this particular virus were to develop efficient human-to-human transmission we could see a global population collapse.
"Swine flu has only recently emerged so we have had less time to study its effects. But any influenza pandemic is cause for great concern regardless of what strain it is."
International influenza expert and journal editor Dr Alan Hampson says that it is essential that the focus on swine flu doesn't distract health professionals from the risk still posed by bird flu, which is continuing to rise, particularly in Egypt.
"Wouldn't it be a terrible irony if bird flu suddenly achieved the ability to transmit readily in humans, possibly aided by widespread infection of and that fact that most of our resources are focussing on that" he says.
Dr Hampson, who has worked extensively with the World Health Organization and is an influenza advisor to the Australian Government, says that the WHO recommended that all countries should develop preparedness plans.
"However, web-based evidence suggests that only 45 countries have produced plans so far and these tend to be the more developed countries, who may be less vulnerable" he says.
Source: Wiley-Blackwell
Explore further: Researchers monitor for next novel influenza strain
add to favorites email to friend print save as pdf
Related Stories
Pandemic mutations in bird flu revealed
Jul 09, 2008
Pandemic closer but not inevitable: Lancet
Apr 28, 2009
A pandemic of swine flu has edged nearer but the threat can be avoided if governments and individuals join in limiting the contagion, The Lancet said in an editorial on Tuesday.
Recommended for you
UTMB collaboration results in rapid Ebola test
8 hours ago
User comments : 0
Click here to reset your password. | <urn:uuid:c1c7f341-dde6-4fbd-933a-51450252967b> | 3 | 2.90625 | 0.022349 | en | 0.952221 | http://phys.org/news164033981.html |
If I have a line of copper wire (let's say $\textrm{1 meter}$ long, $\textrm{1 mm}$ thick) and one end is a flattened disk of copper about the size of a quarter, and I apply a lot of heat to it (I'm talking $800\,^{\circ}\textrm{C}$), will the entire line be heated to the same degree? I mean, what temperature will the unheated end be after, say, a minute? Can it too reach $800\,^{\circ}\textrm{C}$ over time?
I'm going to start by asking the question: what would happen if the cool end were at $400\,^{\circ}\textrm{C}$? In this case, the rate of heat flow from the hot to the cool end would be $$ \begin{align} \frac{k A}{l}\Delta T &= \frac{400\,\textrm{W/mK}\cdot \pi\,(0.0005\,\textrm{m})^2}{\textrm{1 m}} \cdot 400\,\textrm{K} \\ &= 0.1257\,\textrm{W} \end{align} $$ The radiative transfer from the copper to the surroundings, which I'll call air at $20\,^{\circ}\textrm{C}$, will follow from the Stefan-Boltzmann Law. For the copper, the radiative flux is $$ \begin{align} \sigma \, T^4 &= \left( 5.67 \times 10^{-8}\,\textrm{W/m}^2 \textrm{K}^4 \right) \left(673\,\textrm{K}\right)^4 \\ &= 11632\,\textrm{W/m}^2 \end{align} $$ For the back flux from the air (disregarding any convection), you have $ \left( 5.67 \times 10^{-8}\,\textrm{W/m}^2 \textrm{K}^4 \right) \left(293\,\textrm{K}\right)^4 = 418\,\textrm{W/m}^2 $, so if the disk has a radius of $1\,\textrm{cm}$ and is two-sided, its surface area is $ 2\,\pi \,\left(0.01 \,\textrm{m}\right)^2 = 6.28 \times 10^{-4}\,\textrm{m}^2 $, and the net radiative loss is about $ \left( 11214\,\textrm{W/m}^2 \right) \left( 6.28 \times 10^{-4}\,\textrm{m}^2 \right) = 7.04\,\textrm{W} $. Evidently the radiative loss would be a lot more than the conductive gain, so the equilibrium temperature of the "disk" end is going to be considerably lower than $400\,^{\circ}\textrm{C}$.
Next I tried a formal solution, but I didn't like the result(!) so I'll just see what happens if the disk temperature is $100\,^{\circ}\textrm{C}$: radiative loss $ = 5.67 \times 10^{-8}\cdot \left( 373^4 - 293^4 \right) = 680\,\textrm{W/m}^2 $, a total of $0.427\,\textrm{W}$. Conductive gain $ = 400 \cdot \pi \cdot 0.0005^2 \cdot 700 = 0.220\,\textrm{W} $. So, if the copper wire were thicker (let's say ten times so, placing it at $1\,\textrm{cm}$), would that enable the whole wire to reach $800\,^{\circ}\textrm{C}$?
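As a cross-check on the arithmetic above, here is a small numerical solve of the same lumped balance: steady-state conduction along the wire set equal to net radiation from the two-sided disk. The 1 cm disk radius and the neglect of convection and of losses along the wire's side are the same simplifying assumptions used in the estimates above.

```python
import math

# Lumped steady-state model from the question: all heat enters by conduction
# along the wire and leaves by radiation from the two-sided disk. Side losses
# from the wire and convection are ignored, as in the estimates above.
K_CU = 400.0      # W/(m K), thermal conductivity of copper
SIGMA = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
L = 1.0           # m, wire length
R_WIRE = 0.0005   # m, wire radius (1 mm thick wire)
R_DISK = 0.01     # m, assumed disk radius (the 1 cm "quarter" used above)
T_HOT = 1073.0    # K, heated end (800 C)
T_AMB = 293.0     # K, surroundings (20 C)

A_WIRE = math.pi * R_WIRE**2
A_DISK = 2.0 * math.pi * R_DISK**2  # both faces of the disk

def net_flow(t):
    """Conductive gain minus net radiative loss at disk temperature t (W)."""
    gain = K_CU * A_WIRE / L * (T_HOT - t)
    loss = SIGMA * A_DISK * (t**4 - T_AMB**4)
    return gain - loss

# net_flow is positive at T_AMB and negative at T_HOT, so bisect for the root.
lo, hi = T_AMB, T_HOT
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_flow(mid) > 0:
        lo = mid
    else:
        hi = mid
t_eq = 0.5 * (lo + hi)
print(f"equilibrium disk temperature ~ {t_eq - 273.15:.0f} C")
```

With these numbers the bisection settles near 70 °C, consistent with the estimate above that the equilibrium lies below 100 °C, far short of 800 °C.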
For given parameters (L=1 m, r=0.5e-3 m), the side surface of the wire $2 \pi r L$ ~ 3e-3 m$^2$ is larger than the disk area ~ 0.6e-3 m$^2$ so the disk is not relevant. A thicker wire would be at higher temperature since the heat flux along the wire scales as $r^2$ and heat losses through the side surface scale as $r$. – Maxim Umansky Dec 31 '13 at 7:06
1 Answer
No calculations are necessary: the First Law of Thermodynamics tells us the unheated end will never reach 800 degrees Celsius. As long as there is heat loss along the length of the wire, the unheated end can never reach the temperature of the heated end.
Theology: Ch. 5 for test
29 terms by lucy_bray
Describe conditions in the Roman Empire during the 5th century
The collapse of the empire inaugurated a period of decline in the West: the old world passed away and confusion reigned, as the basis for a new order had yet to coalesce.
How did Ulphilas influence the Germanic tribes and Christianity?
The Goths, Burgundians, Lombards, and Vandals were all converted to Arian Christianity because of Ulphilas.
What is the oldest Germanic document?
Ulphilas' translation of the Bible into Gothic
What was the key for the Church in converting the Germanic tribes?
How did Germanic invasions change Christian attitude in the 5th century?
They were discouraged at the fall of the empire, but they took refuge from the chaos in a form of asceticism, from which monasticism emerged
How is Christian monasticism unique?
Men and women who enter the monastic life model themselves on Jesus Christ by dedicating themselves to prayer and joyful penance.
How is the eremitical life structured?
In the early days, monks withdrew to the desert to lead a contemplative life.
What two orders have taken much of their inspiration from eremitical monasticism?
Carmelites and Carthusians
What was a common problem of early hermits in Egypt?
Word of their holiness spread, so people went to join them, ruining their solitude
Name three effects of monasticism on Europe.
Recovery and evangelization of rural society, intellectual growth, and civilization of the Germanic peoples
What are the chief qualities that lend the "Rule of St. Benedict" to harmonious religious life?
The Rule is lauded for its spirit of peace and love, as well as moderation in ascetical life.
What three vows are accepted by Benedictines?
Poverty, chastity, and obedience
Who was St. Scholastica?
The sister of St. Benedict
What was St. Scholastica's main accomplishment?
Founding of the first Benedictine convent
In what ways was Pope Gregory I a historical marker?
His papacy is often used as a marker for the beginning of the Medieval Age
What title did Pope Greg I use during his papacy?
"Servus servorum Dei", or "servant of the servants of God"
What title did Pope Gerg I reject for the Patriarch of Constantinople?
Ecumenical Patriarch
Why is the Koran written in Arabic?
Because it is believed to be the language of revelation
Who is the common ancestor of Judaism, Islam, and Christianity?
Abraham
What is the position held by Jesus in Islam?
Holy Prophet
What is the name of the creedal statement of Islam? (Hint: one of the 5 Pillars of Islam)
The Shahada
Who suffered under the expansion of Islam?
The Christian lands which they conquered
At what battle were the Muslims defeated, and who defeated them?
The Battle of Tours, by the Franks
Why does Jerusalem hold religious significance in Islam?
The Dome of the Rock was built there
What did JP2 say about Islam?
While Islam gives some of the most beautiful names to God, God for them is distant, and Islam is not a religion of redemption.
What is required 5 times daily for Muslims?
Prayer, in the direction of Mecca
What is the name for the pilgrimage to Mecca? (Hint: one of the 5 Pillars of Islam)
The Hajj
What is the name for the alms required by Islam for purification (Hint: one of the 5 Pillars of Islam)
The Zakah
What is the name for the Islamic holy month? (Hint: one of the 5 Pillars of Islam)
Ramadan
photosphere
sphere of light; the surface layer of the sun; the layer visible to the observer on Earth
chromosphere
the sun's atmosphere; a colorful layer of gases that surrounds the sun's photosphere; visible only through a monochromator
corona
a vapor blanket that surrounds the chromosphere and extends to about 249,000 miles from the surface of the sun; visible only during a total solar eclipse
solar eclipse
the blocking out of the sun's light that occurs when the moon comes between the sun and the earth
lunar eclipse
the obscuring of the moon that occurs when the earth comes between it and the sun; the moon is temporarily darkened as it passes through the earth's shadow
air pressure
the pressure of atmospheric or compressed air; the total weight of air on some particular place on earth
atmosphere
the layer of gases surrounding a planet or moon
ozone
a form of oxygen with three atoms of oxygen in its molecule instead of the two atoms found in molecules of normal oxygen gas
airglow
a light visible on the horizon on especially dark and clear nights; cast by ions created when solar rays strip electrons from atoms of gases in the upper atmosphere
meteoroids
small solid pieces of rocky debris traveling through space; many meteoroids are fragments broken off asteroids
ion
an atom that is either positively charged because it has lost some or all of its electrons, or negatively charged because it has picked up one or more extra electrons
solar wind
a steady stream of subatomic particles speeding outward from the sun and through space at a million miles per hour
aurora borealis
the northern lights
aurora australis
the southern lights
only time the corona is visible
during a solar eclipse
93,000,000 miles
How far away is the sun?
oxygen
21% of the air is made of this element
troposphere
Layer of Earth's atmosphere where weather occurs
Where do sunspots occur?
on the photosphere
waxing crescent
the first phase after a new moon
mesosphere
meteoroids burn up in this layer
Uranus
the first planet discovered in modern times. It was discovered by Sir Wm Herschel
Jupiter
largest planet with the big red spot
Venus
Earth's twin and brightest planet in the sky
Saturn
planet with 7 rings and over 60 moons
Neptune
royal blue planet where the wind blows hardest
Earth
only planet suitable for life
Uranus
planet that rotates on its side with 27 moons
Mercury
planet closest to the sun
Mars
the red planet
African rainforests: past, present and future
Yadvinder Malhi , Stephen Adu-Bredu , Rebecca A. Asare , Simon L. Lewis , Philippe Mayaux
The rainforests are the great green heart of Africa, and present a unique combination of ecological, climatic and human interactions. In this synthesis paper, we review the past and present states and processes of change in African rainforests, and explore the challenges and opportunities for maintaining a viable future for these biomes. We draw in particular on the insights and new analyses emerging from the Theme Issue on ‘African rainforests: past, present and future’ of Philosophical Transactions of the Royal Society B. A combination of features characterizes the African rainforest biome, including a history of climate variation; forest expansion and retreat; a long history of human interaction with the biome; a relatively low plant species diversity but large tree biomass; a historically exceptionally high animal biomass that is now being severely hunted down; the dominance of selective logging; small-scale farming and bushmeat hunting as the major forms of direct human pressure; and, in Central Africa, the particular context of mineral- and oil-driven economies that have resulted in unusually low rates of deforestation and agricultural activity. We conclude by discussing how this combination of factors influences the prospects for African forests in the twenty-first century.
1. Introduction
In recent decades, there has been a surge of interest in tropical forests, as there is increased appreciation of the rich biodiversity they host and the many roles they play in the functioning of the Earth system at local, regional and global scales. Of the world's major tropical forest regions, most research and policy attention has focused on the Amazon region, the world's largest tropical forest bloc, and to a lesser extent on Southeast Asia, the third largest tropical forest region. By contrast, the world's second largest tropical forest region, the tropical forests of Central and West Africa (termed the Guineo-Congolian region) have been relatively neglected. This has been for a number of reasons, including challenging and fragmented politics, civil conflicts and logistical as well as infrastructure challenges. Nevertheless, there is an extensive amount of research activity in the African rainforest zone that has rarely been compiled in a single interdisciplinary volume.
This review paper synthesizes the insights emerging from the theme issue on ‘African rainforests: past, present and future’ of Philosophical Transactions of the Royal Society [1]. This issue has explored African humid forests from a variety of perspectives, including archaeology, palaeoecology, ecology, climate science, satellite remote sensing, global climate-vegetation modelling, international policy and social science. All tropical continents and regions are different in their climate, ecology, human context and contemporary pressures. This special issue presents a synthesis of knowledge (including several commissioned reviews) and cutting-edge research from the least understood of the major tropical forest regions. It highlights the many unique features of the African forest biome.
First, it is necessary to acknowledge the limits of this theme issue. It focuses on the humid tropical forest biome (the ‘rainforests’). There are many other valuable biomes in Africa, most notably the extensive dry open forest, savanna and grassland biomes, and also mangroves, afro-montane ecosystems and others. All of these are valuable and fascinating ecosystems, which for reasons of brevity are not covered in this volume. Second, many of the analyses presented dwell on the largest biogeographic unit that accounts for 95% of African rainforests, the Guineo-Congolian forests of West and Central Africa. We particularly focus on Central Africa (technically the Congo–Ogooué Basin and contiguous forests, hereafter termed the Congo Basin for brevity), which accounts for 89% of African rainforests. The submontane forest patches of East Africa and the unique forests of Madagascar receive less detailed attention here. However, a number of studies, including those on deforestation [2,3], woody encroachment [4], climate [5] and forest structure and biomass [6], do extend beyond the Guineo-Congolian forest zone.
2. The extent, biomass and structure of African rainforests
The issue presents two new, ground-breaking analyses of the extent, biomass and structure of the African rainforest realm. Mayaux et al. [2] present a new wall-to-wall map of humid forest cover in Africa, for the year 2005, at 250 m resolution, dividing the region into four classes: lowland rainforest, swamp forest, rural complex (10–30% tree cover and more than 50% croplands) and other land cover (remaining savanna, croplands, etc.). The analysis makes key an advance over previous maps by using the twice-daily passes of the Terra and Aqua satellites, with their on-board moderate resolution imaging spectrometer. The frequency of the satellites' overpasses enables sufficient collection of cloud-free data, even over the cloudiest regions.
Compared with the American and Asian tropics, there have been very few systematic regional studies of even the basic attributes of African forests such as biomass, species diversity and structure. Lewis et al. [6] present the first large-scale analysis of how the biomass and structure of old-growth African forests vary across the region. Presenting data from 260 forest plots, they find that African forests have a mean above-ground biomass of 395.7±14.3 Mg dry biomass ha−1, and the mean increases to 429 Mg dry biomass ha−1 in Central Africa. This is much higher than the mean value of 289 Mg ha−1 reported for Amazonia, and comparable with the mean 445 Mg ha−1 reported for the famously high biomass forests of Borneo. Another key feature is that African forests have a lower mean stem density (426 ± 11 stems of 100 mm diameter or more, compared with around 600 in Amazonia or Borneo), consisting of proportionately many more larger-sized trees than other continents, and fewer small trees. The reasons for these differences are mysterious. Does this imply much lower disturbance rates in Africa that favour longer-lived trees, or perhaps higher net primary productivity? Does the (until recently) pervasive presence of forest elephants, gorillas and other megafauna reduce the number of small stems and increase nutrient availability [7], and do these factors also explain the difference between higher biomass Central Africa and lower biomass West Africa? Does some aspect of the physical environment promote high biomass, for example, via lower rainfall, possibly sunnier conditions or deep weathered soils? Another striking feature is that on average the plots seem to be increasing in biomass over time [8,9], which may be a response to global atmospheric change, or else a legacy of ongoing recovery from past human or climatic disturbance or a combination of the two. These agents of change are discussed further below.
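A note on the uncertainties quoted here: a plot-network mean of this kind is commonly reported as mean ± standard error, with SE = sd/√n, so an error of ±14.3 over 260 plots would imply a plot-to-plot spread of roughly 230 Mg ha−1. A minimal sketch of that computation with synthetic data (the values are illustrative random draws, not the published measurements, and reading the quoted ± values as standard errors is an assumption here):

```python
import math
import random

random.seed(1)
# Synthetic above-ground biomass values for n forest plots (Mg dry mass/ha).
# Illustrative only: these are random draws, not the Lewis et al. plot data.
n = 260
plots = [random.gauss(396.0, 230.0) for _ in range(n)]

mean = sum(plots) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in plots) / (n - 1))  # sample s.d.
se = sd / math.sqrt(n)  # standard error of the mean

print(f"mean = {mean:.1f} Mg/ha, standard error = {se:.1f} Mg/ha")
```

The standard error shrinks with √n, which is why a network of hundreds of plots can pin down a continental mean despite very large variation between individual plots.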
3. A history of disturbance
Another key feature of African rainforests is the heavy influence of past disturbance, both through climate change and human activity. The rainfall regime of much of humid Africa is close to the lower threshold of rainforest viability [10,11]; hence, small changes in rainfall or in intensity or duration of the dry season can cause fairly large-scale changes in rainforest or savanna cover.
Willis et al. [12] summarize the history of climate change in Africa since the Last Glacial Maximum, 12 000 years ago. The theory that tropical rainforests retreated into small refugia during the cold arid phases of the Ice Age was first proposed for Amazonia [13] but has largely fallen out of favour for that rainforest region [14] with evidence that continuous, albeit drier, lowland forest persisted across Amazonia throughout the Ice Age. In Africa, by contrast, there is strong evidence of rainforest retreat during dry periods [15] that leaves a legacy in the current distribution of African vegetation species, with slow-dispersed species (e.g. those that are dispersed ballistically rather than by animals or wind) still expanding slowly out of refugia [16]. Much of the present-day rainforest belt in West and Central Africa would have been savanna–grasslands in the Ice Age, and the rapid climate fluctuations would have favoured species that dispersed and colonized rapidly. The low plant diversity of most African rainforests compared with Amazonian or Asian counterparts under similar modern climates may be a legacy of this history of frequent climate and vegetation change [17].
After the Ice Age, much of Africa experienced a rapid transition to a climate warmer and wetter than present, and these conditions broadly persisted over the period 11 000−4000 years BP, an interval referred to as the African humid period [12]. The higher rainfall and deeper penetration of the West African monsoon in this period was associated with solar orbital forcing, with more intense heating of the Sahara over the northern summer driving a stronger monsoon. At the peak of the African humid period (11 000−8000 years BP), the African vegetation zones extended much further (up to 400–500 km) to the north of their present position, and the Sahara was criss-crossed by lakes, rivers and inland deltas [12,18].
As solar orbital forcing gradually shifted, the climate in much of West and Central Africa shifted to a drier regime around 4000–2000 years BP [12]. This period is associated with retreat of forest cover and expansion of savannas, but also expansion of human activity. Oslisly et al. [19] present a comprehensive synthesis of human presence in Atlantic Central Africa. There is some debate about the extent to which human activity caused forest retreat, or simply took advantage of the opening up of the forest margins in a drier climate [20–23]. Both factors are probably important, but the timing of forest retreat favours climate change as the underlying driver of forest retreat.
Oslisly et al. [19] paint a compelling picture of waves of human settlement, punctuated by periods of population collapse. The Late Stone Age began in Central and West Africa around 40 000 years BP, and persisted until around 3500 years BP, when stone-working hunter–gatherers gave way to Neolithic farmers migrating into the region from the Sahel. These early farmers practiced rudimentary slash-and-burn, and took advantage of climate drying and forest fragmentation to penetrate into the forest block. Soon after (approx. 2800–2500 BP), a wave of Iron Age farming spread south. These farmers increased rapidly in numbers and likely had a profound effect on the forest: with their iron tools, they had the potential to slash-and-burn more extensively in the forest, and the process of iron smelting also required great quantities of charcoal. They also probably increased fire frequencies in savannas, favouring the spread of savannas into forests. Iron Age settlement peaked around 1900 years BP, but was followed around 1600–1000 BP by an extensive population crash, suggesting that Atlantic Central Africa was almost devoid of people at this time. The cause of this crash is still a mystery, but disease has been suggested, with sleeping sickness or Ebola as possible culprits. As the human population crashed, the forest likely expanded and recovered.
From 1000 years BP, a second wave of metalworkers settled into the forest region. This new expansion peaked around 500 BP, but then suffered a new crash. This time the Atlantic slave trade may be the likely cause of collapse, both by direct removal of people and by causing abandonment of poorly defended agricultural areas. The subsequent colonial policy (in some countries) of forced resettlement of rural communities along transport routes also contributed to reduced agricultural activity across broad expanses of forest. Again, the forest biome probably expanded and increased in biomass. Many of the forest tree species that characterize African forests from West Africa to Congo are characteristic advanced succession species (such as the Aucoumea klaineana (okoumé) that dominates parts of Gabon), indicating a forest age of a few hundred years. Another example, Gond et al. [24] used a satellite-based analysis of seasonality to describe a broad band of low diversity, disturbed and semi-deciduous forest in the Sangha River Interval, and area of sandier soils and possible lower rainfall that divides the western Congo Basin from the eastern Basin. In this presently low population density region, there is abundant evidence of past cultivation, and past retreat of the forest [19].
Hence, the story of Africa's forests has been one of climatic change even throughout the Holocene, and altering levels of direct human impacts. The ecology and biodiversity of Africa reflect this history of disturbance, with lower levels of wet-affiliated species than expected for the current climate [17], and a high abundance of large trees that disperse and grow rapidly and perhaps represent an advanced stage of succession (ecological ‘scar tissue’ spreading over a disturbance [25]). These species may therefore be more adaptable to contemporary climate change and disturbance, but this hypothesis remains to be rigorously tested.
The African forests have been through phases of dense human settlement, and also population collapse, as in Central Africa 1600–1000 BP and more widely since 500 BP with the Atlantic slave trade and forced resettlement. Currently, there is very high population pressure in some regions (e.g. across West Africa, and the northern and eastern margins of the Congo Basin), but low population pressure in the central and western regions of the basin. The human populations of the American tropics also witnessed a massive (disease-associated) collapse after European colonization of the Americas. However, pre-collapse human impacts in much of Amazonia away from drier fringes and river margins are likely to have been less [26,27] despite some arguments to the contrary [28]. In particular, the Americas never experienced an Iron Age, which in Africa produced more effective tools for clearing forest and greater demand of wood fuel for smelters. The climate of African rainforests is also, on average, much drier than most of the tropics, and therefore more suitable for a greater variety of successful agricultural crops.
4. Deforestation: patterns and causes
Deforestation is the most visible and prominent agent of contemporary change in tropical forests. Globally, rates of deforestation at the start of this century were around 5.4 million ha yr−1 [29], contributing about 1.30±0.24 Pg C of emissions to the atmosphere, about 15±3% of global human-caused CO2 emissions [30]. The rates of deforestation in Amazonia and Southeast Asia are fairly well defined [29,31], because national reporting capacity in countries such as Brazil and Malaysia is high, and in area terms much of it is driven by medium-to-large-scale clearing for cattle ranching and soya bean (in Amazonia) and oil palm plantations (Southeast Asia), both of which are fairly easy to detect by satellite. In Africa, there has been much more uncertainty about rates and patterns of deforestation, because national reporting capacity has been low, and the mode of deforestation is primarily small-scale clearing by subsistence farmers. This requires high-resolution imagery such as that provided by Landsat.
Mayaux et al. [2] present a new analysis of deforestation across the African rainforest zone over two 10-year intervals 1990–2000 and 2000–2010, taking advantage of the recent free availability of the Landsat archive to conduct high-resolution time-series analyses of 10 × 10 km areas distributed on a regular grid at each integer latitude and longitude intersect. They estimate that African rainforest net deforestation rates were 0.59 million ha yr−1 between 1990 and 2000, and decreased to 0.29 million ha yr−1 over 2000–2010. This rate is four times smaller in absolute terms than that in Latin America, and the proportional rate is also smaller (0.3% versus 0.4%; [32]). The absolute rates in Asia are also three to four times larger [31]. Hence, from a global perspective, much of Africa is still a relatively low rainforest deforestation continent, contributing only about 11% to global gross deforestation. This reflects the almost complete absence of agro-industrial scale clearing in Africa, which accounts for around 55% of global tropical rainforest deforestation in the 2000–2005 period [29]. At a finer scale, there are hotspots of deforestation in West Africa and the fringes of the Congo Basin, and no evidence of a decline in deforestation in Madagascar. Rates of deforestation seem to rise when rural population densities rise above around 10 people km−2, or the area of cropland rises above 10%, in a 10 × 10 km2 square. Hotspots of deforestation are also found around transport networks and close to cities, including areas suitable for agriculture within 5 h travel time to major markets, and wood fuel and charcoal provisioning within 12 h from a city. These are very similar to the ‘waves of degradation’ documented in East Africa, which conform to economic models based on the value of products derived from tropical landscapes [33].
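The headline numbers in this section can be cross-checked against one another with some quick arithmetic. All inputs below are figures quoted above; the "implied" quantities are back-of-envelope derivations, not values taken from the cited papers, and the share calculation mixes a 1990s African rate with an early-2000s global rate:

```python
# Figures quoted in this section
africa_rate_1990s = 0.59   # Mha/yr, net African rainforest loss, 1990-2000
global_rate = 5.4          # Mha/yr, global tropical deforestation, early 2000s
prop_rate_africa = 0.003   # 0.3 %/yr proportional rate for Africa
emissions = 1.30           # Pg C/yr emitted by tropical deforestation
emissions_share = 0.15     # ~15% of global human-caused CO2 emissions

# Africa's share of global deforestation (text quotes ~11%)
africa_share = africa_rate_1990s / global_rate

# Rainforest area implied by the 0.3 %/yr proportional rate
implied_area_mha = africa_rate_1990s / prop_rate_africa

# Total anthropogenic carbon emissions implied by the 15% share
implied_total_pgc = emissions / emissions_share

print(f"Africa's share of global deforestation: {africa_share:.1%}")
print(f"implied African rainforest area: ~{implied_area_mha:.0f} Mha")
print(f"implied total anthropogenic emissions: ~{implied_total_pgc:.1f} Pg C/yr")
```

The share works out to about 11%, matching the figure quoted in the text, and the proportional rate implies an African rainforest area of roughly 200 Mha.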
The pattern of low and declining rates of deforestation in African rainforests (and particularly in Central Africa) is somewhat surprising and warrants further exploration. Rudel [3] presents a national-scale analysis of the drivers of deforestation. He notes several key features of human ecology that are important in the context of Africa: the population has long been much poorer and more rural than that in Latin America but now has the highest rates of urban population growth (3.7% per year between 2000 and 2005), much of the urbanization is occurring without industrialization (e.g. domestic energy supply is mainly wood fuel), and almost all of the significant Congo Basin countries that account for almost 90% of Africa's rainforest cover have large extractive oil and mineral industries (with Cameroon being a notable exception). This extraction of oil and minerals triggers economic booms that set off cascades of economic side effects (such as high labour costs, less competitive agricultural exports, more food imports) that retard agricultural expansion, accelerate urbanization and slow rates of deforestation in remote, rural regions. Deforestation is particularly concentrated in peri-urban areas and along transport routes [2], to meet the demands of the concentrated and growing urban population. Consistent with this hypothesis, both Rudel and Mayaux et al. [2] find that the lowest rates of deforestation tend to occur in mineral- and oil-rich nations. It is worthwhile to consider the particular circumstances of the rainforest giant, the DRC, separately in this analysis, as some different dynamics may be occurring there. It accounts for over 50% of African rainforest area [2], has high oil and mineral receipts (around 90% of exports) and yet low rural income, and has experienced high levels of political instability that is still pervasive in some regions. Albeit from a very low baseline rate, it is experiencing increasing levels of deforestation [34], in particular around its rapidly urbanizing zones.
5. Selective logging
Logging is a more cryptic agent of change, challenging to monitor by satellite. In popular usage, logging is often confused with deforestation, partially, because, in temperate regions, logging is often associated with clear-cutting of low diversity forests. In the high diversity African tropics, there are typically only one to two timber trees harvested per hectare [35], and it makes little economic sense to clear cut forests remote from markets. This contrasts with Southeast Asia, where the dipterocarp-rich forests yield many more timber species, and are much more intensively logged (around 10–20 trees per hectare) and heavily damaged by logging. In principle, a tropical forest can be logged with low impact, and given sufficient recovery time before return logging, this could be sustainable, although the definition of what is sustainable logging is much debated [36,37]. Much of the damage associated with logging comes from collateral damage to other trees, and through the opening up of logging tracks and platforms. Research from Amazonia [38] has also suggested that logging can often be the precursor of complete deforestation, and also opens up the forest to a drier microclimate and increased fire vulnerability.
In Central Africa, much of the lowland rainforest area is apportioned to long-term logging concessions, about 44 million ha [39]. Over the past two decades, there has been a noteworthy shift from unsustainable ‘mining’ of timber resources to at least an aspiration for sustainable management and conservation of timber resources. Between 1990 and 2002, most countries in Central Africa redefined their forestry laws to make management plans for logging concessions mandatory. Over 18.6 million hectares of forest in Central Africa now have a management plan [40].
Gourlet-Fleury et al. [35] present results from a long-running (24 year) logging experiment in the Central African Republic to assess the effects of disturbance linked to logging and thinning on the structure and dynamics of the forest. They find that the forest responded fast to the logging disturbance, rebounding back to near original levels of biomass after 24 years, but the volume of timber did not recover so quickly. With the logging of one to two trees per hectare removed every 30 years, the forest appears to maintain biomass and function. However, the most highly valued timber species take much longer to recover, so in terms of maintenance of timber species such logging is still effectively ‘mining’ of timber resources.
Mayaux et al. [2] and Rudel [3] explore the possible linkages between logging and subsequent deforestation. They find little evidence of logging leading to deforestation, at either national or local scales. One key difference from Amazonia is the lack of an active colonization frontier with pressure to clear logged forest. The loose network of logging tracks, owing to the light exploitation density, combined with the low population density, does not provoke the critical conditions for deforestation, except around a few concessions in the DRC. Logging tracks do, however, greatly facilitate hunting pressure, which can have knock-on consequences for forest structure and biomass, as discussed below.
6. Woody encroachment
Many analyses of forest cover change are biased towards detecting and quantifying forest loss. Much less attention has been focused on areas where forest cover seems to be increasing. Such increases in forest cover (or woody encroachment) may be caused by locally reduced direct human pressure (e.g. where rural depopulation is underway), particularly through changes in the fire regime; by increasing rainfall (as suggested by most climate models for much of eastern Africa: [5,41]); or by rising atmospheric CO2 concentrations favouring tree growth over grasses, and enabling tree saplings to better escape grass-associated ‘fire traps’ in mixed tree–grass regions [42].
Mitchard & Flintrop [4] present an analysis of this phenomenon of woody encroachment, both by conducting a systematic review of published instances of woody encroachment in Africa and by conducting a long-term (1982–2006) study of satellite-derived normalized difference vegetation index (NDVI), a measure of greenness, across the mixed tree–grass systems of Africa. In the peak dry season, when grasses are yellow, NDVI is likely to be a good proxy for tree cover. They find evidence of increased dry-season NDVI across the northern fringes of the rainforest biome, but a general loss of greenness in the southern African Miombo woodland regions, where there is heavy degradation pressure. All documented studies of woody encroachment coincide with an increasing trend of NDVI at the same locale (though not always significant).
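The NDVI used here is computed from red and near-infrared surface reflectance, and the encroachment signal is essentially a positive linear trend in dry-season NDVI over time. A minimal sketch of both steps; the reflectance values below are illustrative stand-ins, not real AVHRR data from the study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; higher values indicate greener cover."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Synthetic dry-season reflectances for one pixel, 1982-2006 (stand-ins
# for real satellite data): NIR creeping up as woody cover encroaches.
years = np.arange(1982, 2007)
nir = 0.30 + 0.002 * (years - 1982)   # near-infrared band
red = np.full_like(nir, 0.10)          # red band, roughly constant

series = ndvi(nir, red)

# Least-squares linear trend in dry-season NDVI (NDVI units per year);
# a positive slope is the signature of woody encroachment.
slope, intercept = np.polyfit(years, series, 1)
print(f"NDVI trend: {slope:+.4f} per year")  # positive slope: encroachment
```

In practice, the trend would be fitted per pixel across the full image stack, with a significance test on the slope before mapping it as encroachment.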
In South Africa, there is substantial evidence that the woody encroachment is associated with increased CO2 rather than with rainfall or with reduced land-use pressure [42]. It is plausible that a similar factor is driving the encroachment in more northern regions, although recent increases in precipitation following drying in the 1970s and 1980s may also be a factor [43]. Lewis et al. [8] have noted that intact old-growth forests in Africa have been increasing in biomass, possibly in response to higher CO2, something also noted by Gourlet-Fleury et al. [35] in the control plots of their logging experiment. Hence, the consistent phenomenon of CO2 fertilization may be increasing forest biomass in biomes ranging from wet rainforests to dry tree–grass systems. Thus, calculations of the carbon sink across the tropical forest biome should include that from woody encroachment, not yet well estimated, in addition to that already estimated for rainforests.
7. Defaunation
Other aspects of human-induced change in rainforests are less visible to satellites than deforestation, but more pervasive. Among the most important of these is hunting, whether for bushmeat or for international trade in wildlife parts such as ivory and skins. Abernethy et al. [44] present a review of the factors driving hunting in Central Africa, and a synthesis of data on the direct impacts of hunting on the functioning of forest ecosystems.
Africa is the origin continent of humanity, and African rainforest species have evolved in the presence of hominid hunters and their ancestors. This is in sharp contrast to, for example, the Americas, Madagascar and Australia, which suffered a major loss of megafauna around the time of arrival of the first human hunters [45]. As a result, until decades to centuries ago, African rainforests held abundant populations of megafauna such as elephants and large primates. However, modern hunting pressure, driven by increased commercial trade, has resulted in depopulation and local extinction of many larger rainforest mammals. The remaining forest reserves of West Africa, often structurally intact but devoid of medium and large fauna, are ‘empty forests’. Now, the most striking rates of defaunation are in Central Africa. A massive increase in ivory poaching has led to 62% of Central Africa's forest elephants being lost within a decade (2002–2011), with no sign of a decline in the rate of poaching [46]. Large declines are also occurring in ape populations, which declined by 50% in a study across Gabon over 1984–2000 [47]. At a less dramatic but even more widespread scale, bushmeat hunting affects around 178 species in Central Africa, with over half of these species deemed threatened by this hunting [44]. It has been estimated that over 6.5 million tonnes of bushmeat per annum are extracted from tropical forests, rising by over 100 000 tonnes each year, of which 90% is extracted from African forests [48,49].
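The 62% loss of forest elephants over 2002–2011 can be translated into an implied compound annual rate of decline. A back-of-envelope sketch; treating the period as a nine-year interval is my reading of the dates, not a figure stated in the source:

```python
# 62% of Central Africa's forest elephants lost over 2002-2011 [46].
# Treating that as 9 years of compound decline, the implied annual
# rate r satisfies (1 - r)**9 = 1 - 0.62.
total_loss = 0.62
years = 2011 - 2002  # 9-year interval (assumed reading of the dates)

annual_survival = (1 - total_loss) ** (1 / years)
annual_decline = 1 - annual_survival
print(f"Implied annual decline: {annual_decline:.1%}")  # roughly 10% per year
```

At roughly 10% loss per year, the population halves about every seven years if the rate is sustained, which is why the absence of any slowdown in poaching is so alarming.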
These massive rates of defaunation represent a direct loss of some of the most unique and valuable aspects of African forests. They also have knock-on effects, such as the loss of top predators such as leopards as their prey become rare, effects on forest structure, seed dispersal and recruitment [50], and possible changes in nutrient diffusion through the forest [7]. These can lead to shifts in the tree species composition of the forest [51]. The loss of high wood density, animal-dispersed species may also lead to a reduction of biomass in the forest [44]. By contrast, Lewis et al. [6] suggest that by ‘culling’ small trees, megafauna might actually increase the preponderance and lifetime of larger trees and the overall biomass of rainforests. This might partially explain why Central African rainforests have higher biomass than West African ones, and why Africa has higher biomass than Amazonia, where the megafauna went extinct around 10 000 years ago. Consistent with this, a recent report tracking forest responses after a forest was hunted to silence showed a dramatic increase in tree stem density [52]. The ongoing expansion of logging trails into the remotest parts of the African rainforest realm is likely only to intensify hunting pressure and its consequences, as accessible areas have much higher bushmeat extraction than inaccessible areas [44].
8. Climate and climate change
A number of papers in this theme issue explore the theme of climate change, a looming but poorly quantified and understood threat to the African rainforest realm. James et al. [5] and also Zelazowski et al. [11] conduct an in-depth analysis of multiple climate-change projections for the region. All climate models agree that the African rainforest regions will warm, with a mean rate across models of 0.8–1.0°C per 1°C of global warming. Hence, the African rainforest realm is likely to warm by 3–4°C over this century under the most likely emissions scenarios. How tropical organisms will adapt to this warming remains unclear—some studies argue that tropical organisms are particularly sensitive to warming because of their limited seasonality and interannual variability in temperature [53,54]. Similarly, in the lowland tropics organisms are required to travel long distances to maintain a constant temperature as mean air temperature increases [55]. Conversely, some species more than three million years old may have experienced warmer temperatures than those expected in the coming decades and may therefore be able to adapt [56].
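The regional warming range quoted above follows from simple scaling of projected global warming by the regional amplification factor. A minimal check, using an illustrative ~4°C global-warming scenario (my assumption for the "most likely emissions scenarios"):

```python
# Regional warming = global warming x regional scaling factor [5].
scaling_low, scaling_high = 0.8, 1.0   # degC regional per degC global
global_warming = 4.0                   # degC by 2100; illustrative scenario

regional_range = (scaling_low * global_warming, scaling_high * global_warming)
print(f"African rainforest warming: "
      f"{regional_range[0]:.1f}-{regional_range[1]:.1f} degC")
# Consistent with the 3-4 degC over this century quoted in the text.
```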
For rainfall, a simple average of change in total annual rainfall across models suggests little overall change in the central and western regions of equatorial Africa, and wetting of the eastern regions. Such an analysis would be deeply misleading, however. When metrics of the seasonality of rainfall are considered, most models predict intensified dry seasons in the western Congo Basin, although a number of models still suggest wetting of western equatorial Africa; hence, averaged across models, there is little overall trend. Outside of the West and Central African core, most models suggest that East African forests will get wetter, but Madagascar's forests will experience increasingly severe water stress. The length and intensity of the dry season is likely to matter more than the mean rainfall in determining the viability of rainforests [11]. James et al. [5] delve further into the patterns and mechanisms that distinguish the ‘dry’ models from the ‘wet’ models. They show that drying is associated with enhanced sea surface warming over the Indian Ocean, together with warming in the tropical North Atlantic causing a northward shift in the inter-tropical convergence zone. Warming in both these regions drives enhanced uplift of air, which in turn drives intense subsidence over western equatorial Africa. The review identifies these ocean-driven atmospheric circulation shifts as the likely atmospheric mechanism behind the potential threat to the rainforests of the western Congo Basin, and it is a clear research priority to identify whether the dry models (the majority) or the wet models are more plausible. There is clear evidence that Congo Basin and West African forests retreated substantially because of climate change as recently as 3000 years ago [19,20], suggesting that climate-change-induced forest retreat is certainly feasible.
Probably the greatest challenge in understanding and projecting the future climate of the African forest realm, and in particular the Congo Basin, is that the current climate is so poorly observed and understood. This deficiency is of global significance, because the Congo Basin is the second most important convective ‘engine’ of the global atmospheric circulation after the Maritime Continent (insular Southeast Asia and the surrounding waters) and is also the region of highest lightning strike frequencies on the planet [57]. In transition seasons (March–May and September–November), the Congo Basin dominates global tropical rainfall [58]. Despite this importance, the Congo Basin is poorly studied because of the dearth of available ground-based climate observations from the region, particularly in the past few decades [58]. The number of rain gauges reporting from this vast region declined from more than 50 in the period 1950–1980, to less than 10 over the period 1990–2010 [58]. Much of this decline is driven by political instability: for example, only three meteorological stations from the DRC reported to the Global Telecommunication System in 2013, despite this country accounting for over half of Africa's rainforests. Washington et al. [58] analyse a range of model and satellite observation products for the region, and show there is little agreement in estimates of the distribution and total quantity of rainfall in the region (e.g. whether the western or eastern Congo Basin is wetter). The datasets differ by a factor of at least two, and in absolute terms by at least 2000 mm per year.
Clearly, there is a need to re-establish a ground meteorological observation system over the region. In the short term, however, Washington et al. [58] suggest that even a short-period, intensive observation campaign that targets the atmospheric circulation and water transport could yield vital insights and clues to the true rainfall regime of the region. Such a campaign would combine radiosondes, weather radar and aircraft data collection, as was done for West Africa over the past decade by the African Monsoon Multidisciplinary Analyses (AMMA) programme [59]. There is also evidence that the tropical African climate is more predictable in the short term (seasonal and interannual scale) than many other regions because of its strong links to Atlantic and Indian Ocean sea surface temperatures [60].
Beyond changes in the mean state of the climate, increasing variability and seasonality are also key indicators of a changing and unstable climate system. An important signature is the intensity and frequency of extreme drought events. Asefi-Najafabady & Saatchi [61] analyse time series of precipitation in Central and West Africa using interpolated ground observations and, since 2001, Tropical Rainfall Measuring Mission satellite rainfall estimates. They show substantial interannual variation in dry season intensity (particularly in 2005, 2006 and 2007) as part of a general drying trend in West Africa and the northern Congo Basin that started in the 1970s.
But are such drought events linked to global climate change or just a signature of natural variability? The drying observed in West Africa since the 1970s is thought to be largely driven by natural variability [62], but is there also a signature of global warming either in this region or across the Congo Basin? Attempts to answer such questions are termed attribution studies. They require very large ensembles of climate model simulations, so that the statistics of a rare event such as an extreme drought can be estimated with some confidence. Otto et al. [60] present an attribution analysis of drought events for the Congo Basin, the first attempt at an attribution study for a tropical forest region. To generate the large ensemble of climate model outputs, they use output from the weather@home project, which runs climate models distributed across volunteers' computers. They compare the simulated current climate with a counterfactual current climate in the absence of greenhouse warming, to see if there is a statistically significant change in the probability of modelled drought events that could be attributed to global warming. In the case of the Congo Basin, they find no conclusive evidence that changes in drought frequency can be attributed to global warming. All such attribution studies rely on the model's ability to simulate the region's climate, and in the case of Central Africa, the lack of observational data makes it very hard to be sure what the actual climate is.
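The core of such an attribution study is a comparison of event probabilities between the factual ensemble (with greenhouse forcing) and the counterfactual ensemble (without), commonly summarized as a risk ratio or a fraction of attributable risk (FAR). A sketch with synthetic ensemble counts; these are illustrative numbers, not the weather@home results:

```python
def attribution_stats(n_events_factual, n_runs_factual,
                      n_events_counterfactual, n_runs_counterfactual):
    """Risk ratio (P1/P0) and fraction of attributable risk (1 - P0/P1)
    for an extreme event, from factual vs counterfactual ensembles."""
    p1 = n_events_factual / n_runs_factual                 # P(event | warming)
    p0 = n_events_counterfactual / n_runs_counterfactual   # P(event | no warming)
    risk_ratio = p1 / p0
    far = 1 - p0 / p1
    return risk_ratio, far

# Illustrative counts: droughts of a given severity occur in 105 of 2000
# factual runs versus 100 of 2000 counterfactual runs.
rr, far = attribution_stats(105, 2000, 100, 2000)
print(f"risk ratio = {rr:.2f}, FAR = {far:.2f}")
# A risk ratio near 1 (FAR near 0), as here, mirrors the Congo Basin
# finding: no clear attribution of drought frequency to greenhouse warming.
```

In a real study, confidence intervals on the risk ratio would be estimated (e.g. by bootstrapping the ensemble counts) before concluding whether the change in probability is significant.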
If there is a mean drying, or if variability becomes more extreme, then how will African forests respond to such events? From one perspective, African forests are more vulnerable to climate change because they sit close to a rainfall threshold that favours savannas over rainforests. The previous extensive contractions of the rainforests in the Ice Age and the drier periods of the Holocene indicate that extensive climate-driven rainforest loss is clearly quite possible. On the other hand, as long as the rainfall threshold is not crossed, the disturbance-adapted nature of many African forests may favour more resilience than their Amazonian or Asian counterparts [12,19]. Willis et al. [12] suggest that the palaeoecological record in Africa indicates a nonlinear and spatially heterogeneous response to a change in climate: little change until some threshold is crossed, then a rapid shift in the ecosystem.
There is evidence of at least short-term resilience to drought. Fauset et al. [63] explored the response of Ghanaian forests to the drying event in the 1970s. They found that after 20 years of exposure to a drier climate, the composition of the forests shifted towards more drought-adapted, deciduous species, and plot-level biomass actually increased over time. This suggests a relatively rapid response in forest composition to perturbations, which gives some resilience to the persistence of the forest biome. However, in areas which burnt during a particularly severe drought year in 1983, there was a transition to non-forest habitats that persists to the present day. This highlights the importance of interannual variability, fire and human pressures in mediating the resilience of forests to climate change.
At a larger scale, Asefi-Najafabady & Saatchi [61] examined changes in canopy structure and moisture content using microwave satellite data. They found that, despite substantial drought events over the past decade, there was little evidence of lingering effects on the forest canopy. This contrasts with a study using identical methods in Amazonia [64], which found a lingering multi-year response in the canopy of southwest Amazonia. Amazonia has had a more stable climate history than tropical Africa, and may have higher abundance of species more accustomed to a stable climate, and hence more vulnerable to climate change.
A key factor to consider is the rapid rise of atmospheric CO2, which by the end of this century will reach levels probably not experienced for over 50 Myr. This is a feature of twenty-first century change that makes it very different from previous changes during the Ice Ages. High atmospheric CO2 can have a number of consequences: it can directly stimulate plant photosynthesis and growth, it can increase the water-use efficiency of plants in dry environments, and it can increase the competitive advantage of trees over tropical grasses, enabling woody encroachment and expansion of forests into savannas. The ongoing increase in woody biomass in old-growth African rainforests reported by Lewis et al. [8] and Gourlet-Fleury et al. [35], and the woody encroachment of savannas reported by Mitchard & Flintrop [4], may both be signatures of rising CO2, although other factors such as recovery from disturbance cannot be completely ruled out.
Fisher et al. [65] present a synthesis of outputs from nine global vegetation models for the African rainforest biome. They find that most models predict a current increase in biomass in response to CO2 fertilization of about 0.5 Mg C ha−1 yr−1, very close to the field-observed value of 0.6 Mg C ha−1 yr−1 reported by Lewis et al. [8]. The models suggest that the rate of CO2 absorption has been declining in West Africa and the northern Congo because of the recent dry decades. Most models predict the carbon sink to continue to decline in strength over this century: the exact timing and magnitude of that decline cause model predictions to diverge as the century progresses.
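The per-hectare sink figures can be put in context by scaling to the biome as a whole. The forest-area figure below (~2 million km² of African humid forest) is an assumption for illustration only, not a value from the source:

```python
# Scale the per-hectare carbon sink up to the biome.
sink_per_ha = 0.6          # Mg C per ha per yr, field-observed value [8]
forest_area_km2 = 2.0e6    # ~2 million km2 of African humid forest (assumed)

ha_per_km2 = 100
total_sink_mg = sink_per_ha * forest_area_km2 * ha_per_km2  # Mg C per yr
total_sink_pg = total_sink_mg / 1e9                         # 1 Pg = 1e9 Mg
print(f"Implied biome sink: ~{total_sink_pg:.2f} Pg C per year")  # ~0.12 Pg
```

Even under this rough assumption, the implied sink is a noticeable fraction of global land carbon uptake, which is why the projected decline in sink strength matters beyond the region.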
Hence, the picture emerging from field monitoring, palaeoecological work, satellite monitoring and ecophysiological modelling is one of resilience to occasional droughts, moderate warming and moderate long-term drying. This contrasts with the high rates of climate-change-related extinction produced by bioclimatic envelope models. This resilience is perhaps greater than for other major tropical forest regions because of Africa's history of climate change and human pressure: the potentially most vulnerable species are already extinct. However, the palaeoecological data also point to there being thresholds (perhaps closer in African forests than in other places) beyond which the forest can convert rapidly to savannas. Such thresholds may be amplified when forest–climate feedbacks are considered, whereby the retreat of the rainforest reduces recirculation of water to the atmosphere and thereby reduces rainfall, which causes further forest retreat. Within the constraints of our limited understanding of the climate circulation of the region [58], there is evidence that the rainforest plays a critical role in recycling precipitated water back to the atmosphere [66,67], and regional climate model simulations suggest that extensive loss of forests causes not only large decreases in local rainfall but also affects the African monsoon and hence rainfall patterns across many more arid, marginal areas of this generally dry continent [68].
9. Threats and opportunities
(a) The threat of dramatic change
The unique demographic and economic circumstances of much of humid tropical Africa present the opportunity to raise incomes without extensive forest clearance. However, a major threat to the forest biome is the potential shift to commercial agro-plantations. If poorly planned, these industrial croplands could lead to extensive loss of forests, as witnessed in Southeast Asia and in the soya bean regions of Amazonia, especially when combined with poor governance. On the other hand, the international commercial organizations involved in operating such plantations may be more amenable to land-use planning and international governance pressure. In particular, any discussion of the future of the African forest biome as a whole needs to consider the unique importance and particular circumstances of the DRC, with its combination of high mineral resources, high rates of deforestation, fragile governance and civil conflict.
Climate change is a big unknown, because we simply do not know enough about the nature of the present and future climate of tropical Africa, nor about the response of vegetation to that climate change. We know that it will almost certainly be 3–4°C warmer in the forest realm by 2100, that atmospheric CO2 concentrations will be much higher, and that rainfall variability will probably be greater. Critically, we do not know what the likely patterns of rainfall change will be. James et al. [5] have identified that climate models tend to cluster towards two opposing modes of rainfall change, associated with sea surface temperature patterns in the Indian and Atlantic Oceans. The more negative mode would imply substantial drying and retreat of rainforests across Central and West Africa, as seems to have happened a mere 3000 years ago [15], although in the twenty-first century context the high levels of CO2 may partially offset the negative impacts of drying on forest vegetation. In the opposing ‘positive’ mode of climate change, there would be increased rainfall across Central and West Africa, resulting in substantial woody encroachment in the savanna biomes.
(b) Opportunities for rainforest conservation
There are new opportunities emerging to support the conservation of forests. The opportunity attracting the most attention is through international climate change mitigation funding for reducing emissions from deforestation and forest degradation and conservation of forest carbon stocks, sustainable management of forests and enhancement of forest carbon stocks (REDD+). That is, payments would be made based on verified reductions in carbon emissions in a country relative to an agreed emissions baseline in the absence of any mitigation activity. This represents a potentially transformative opportunity for a more sustainable future for Africa's forests, but also faces a number of challenges, foremost of which are an international agreement on climate change mitigation targets and strategies, and effective implementation of the international flow of funds. At the national and subnational level, REDD+ also brings huge challenges owing to limited national capacity to implement and monitor the complexities of land and tree tenure arrangements, and ensuring safeguards for forest peoples and for biodiversity protection. The slow pace of progress towards any international agreement on climate change is resulting in disenchantment with REDD+. Nonetheless, progress is being made in setting up national architectures and developing the capacity to implement REDD+ and monitor and report on its progress, as well as in implementing local-scale pilot projects. Maniatis et al. [69] examine the current flows of REDD+ finance in Congo Basin countries and the status of national engagement and capacity to implement REDD+. They find that there has already been at least US$550 million of REDD+ financing committed or disbursed, with the biggest recipients being the DRC (41%), regional entities (34%) and Cameroon (15%). 
There is a large disparity in preparedness for REDD+, with the two largest African rainforest countries (DRC and Gabon) making substantial progress, and many of the countries with smaller rainforest areas trailing behind. Such building of capacity and awareness of national forest resources may already be having an impact in slowing deforestation and degradation activities in the region.
In addition to these challenges at international and national level, one key challenge for the implementation of REDD+ or other forest sustainability efforts is successful engagement by local communities in the management of their forests. There are lessons to be learnt and models to be adopted in the light of decades of experience in tropical forest conservation in Africa. Asare et al. [70] highlight and review one promising model that has emerged from Ghana: the community resource management area (CREMA). This mechanism authorizes rural communities and land users to benefit economically from their forest resources, while allowing them to manage the resources in ways that are founded upon traditional values and are compatible with local by-laws and national legislation. As a mitigation strategy, the CREMA has the potential to solve many of the key local-scale challenges for REDD+ in Africa, including definition of boundaries, small-holder aggregation, free, prior and informed consent, ensuring permanence, preventing leakage, clarifying land tenure and carbon rights, and enabling equitable benefit-sharing. Successful implementation of REDD+ at local scale would require African government support for CREMAs or similar mechanisms, and motivation by communities to integrate such systems within their traditional values and natural resource management systems.
We started this synthesis by pointing out that all tropical rainforest continents and regions have their unique history, climate, ecology, governance and patterns of utilization that make the prospects of their tropical forests particular to that region. This theme issue has highlighted the multifaceted uniqueness (or ‘exceptionalism’) of the African humid forest biome. Other tropical forest regions may also have some of these features in common with Africa, but this particular combination of features characterizes much of the African rainforest biome. Key among these are
• the extensive history of climate variation, biome expansion and retreat, and human interaction with the biome,
• the relatively dry and cool climate when compared with other major tropical forest regions,
• the relatively low plant species diversity and yet extremely high animal biomass (in the non-heavily hunted forests),
• complex patterns of customary and state land tenure built on long histories of low-level forest exploitation,
• the dominance of selective logging, small-scale farming and bushmeat hunting as the major forms of pressure on the rainforest biome, in contrast to the agro-industrial pressures that dominate in the tropical Americas and Asia,
• the particular context of the mineral- and oil-driven economies of Central Africa, resulting in unusually low rates of deforestation and agricultural activity, and
• the particular governance and poverty challenges and civil conflict context of the African tropical forest giant, the DRC.
We conclude by highlighting some research needs that have emerged from this theme issue.
First and foremost, there is a need to build up relevant scientific capacity in African countries, building upon and supporting existing research institutions, and creating mechanisms for scholarships for study both in Africa and overseas, and for research fellowships and infrastructure development. Sparsely funded African research institutions should have greater access to funds that can support African research priorities.
There is a need for basic ecological understanding of the African rainforest biome, which lags far behind that of the Americas and Asia. This includes understanding productivity, species distributions, drought and temperature sensitivity, and interactions with climate and soils. This requires investment in selected intensive study sites combined with more extensive distributed networks of study sites, both integrated and standardized with parallel efforts on other rainforest continents. Attention should also be focused on the rarer rainforest formations, such as swamp, montane, inselberg and mangrove forests, as well as the widespread wet and dry forest biomes.
If anything, our understanding of Central African climate is even weaker than our understanding of its ecology. There is a pressing need to rebuild the climate monitoring network in Africa, linking weather stations to global reporting networks and integrating with satellite observations. There are also opportunities for major advances in insight through a short-term, targeted campaign that focuses on understanding the circulation and moisture flow patterns across Central Africa in the dry season. Similar campaigns in South America, such as the Large-scale Biosphere–Atmosphere programme in Amazonia (LBA), have transformed our understanding of the Amazon region and greatly enhanced local capacity. We also need to understand the interactions between the rainforests and the atmosphere: to what extent is the rainforest a critical source of recycled water and a driver of atmospheric circulation across this generally arid continent, and how would rainfall patterns change in the event of substantial retreat of rainforests?
There is also a need to better understand and monitor the particular processes of change that predominate in Africa. What is the nature of small-scale farming and how well can it be monitored by satellites? What are the threshold levels of defaunation pressure that drive species to local extinction or functional irrelevance? How does the spreading network of logging trails affect defaunation? Can we better understand how the structure of intact African forests is affected by historical change and by ongoing defaunation? What are the interactions between forest degradation, fire risk and climate sensitivity?
In social science research, there is a need to better understand the unique interactions between urbanization, poverty, oil and mineral extraction, economic growth, wood fuel use and agriculture in an African context, and in the particular context of each African country. We need to understand the political ecology and economy at scales ranging from international through to community and individual farmer.
There is also a need to better apply research to the practical management and conservation of the rainforest biome. Of critical importance is finding platforms and processes that enable sustainable, effective management of African forests and forest resources, that operate at multiple scales (local to national), and that accommodate state and traditional values and norms. When large-scale agro-industry arrives in the African rainforest biome (as it has begun to do already), how can its negative impacts be minimized and development and conservation goals both be met? What are the optimal strategies to protect the African fauna and the many ecosystem services that they provide? Can we reliably identify the areas of the rainforest biome most vulnerable to climate change, and manage and plan accordingly to maximize the resilience of the biome, its species and its human inhabitants? Can effective mechanisms (at international, national and local scales) be developed that bring benefits to African communities from conserving and sustainably managing the rainforest biome and the many ecosystem services it provides?
This short synthesis has highlighted many surprising aspects of the African rainforest biome, and how different it is in many respects from other, perhaps better understood, rainforest regions. It has also highlighted how little we know and how much there is still to discover. There are reasons for concern, such as the heavy levels of defaunation and the potential impacts of climate change, and reasons for hope, such as the low rates of deforestation and the possible resilience of rainforest species to climate change. We call on the research and policy communities to redouble efforts to give this fascinating rainforest continent the attention it so richly deserves.
Sir Isaac Newton
Chad Boger
Sir Isaac Newton was born on December 25, 1642 in Lincolnshire to Hannah Newton and Isaac Newton Senior. Newton started out as a weak little child but grew up to have one of the best minds of all time (Minds of Science, J. Anderson, pg. 7).
Sir Isaac's first college was Trinity College at Cambridge. He wasn't especially successful in his first few years there because he preferred to pursue his own studies and interests instead of those set by his professors. Newton's turning point came in 1665-1666, when the university closed during the plague years and he continued his studies on his own. In those years Newton made his greatest advances in invention, mathematics and philosophy. In mathematics Newton conceived his "method of fluxions" (infinitesimal calculus), and he laid the foundations for his theory of light and color. He also developed important ideas about planetary motion and gravity. Newton's achievements underlie many of the things we build and rely on today.
He made many fundamental contributions to analytic geometry, algebra, and calculus. Newton's optical research began during his undergraduate years at Cambridge. In 1666 Newton performed a number of experiments on the composition of light. Newton's main discovery was that visible white light is heterogeneous - a mixture of colors that can be considered primary. Newton demonstrated that prisms separate these colors. All this led to his famous experiment, the "experimentum crucis", which proved that a color separated out by one prism cannot be split again by a second prism.
Newton's most famous book is the "Principia". This masterpiece is divided into three books. Book I discusses Newton's laws of motion.
1. Every body continues in its state of rest, or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it.
2. The change in motion is proportional to the force impressed and is made in the direction of the straight line in which that force is impressed.
"Sir Issac Newton." Mar 04, 2015
View full text
essay on Sir Issac Newton
2 (425 words)
Plagiarism level of this essay is: 83%
Order this essay written from scratch Right Now!
Good day! Write my essay "Sir Issac Newton" for money please!
Leave Your comment
Enter symbols below:
Captcha Image | <urn:uuid:2c2c1853-d36c-4d10-b5c5-d7e8507e5307> | 3 | 2.734375 | 0.029805 | en | 0.944013 | http://samples.essaypedia.com/papers/sir-issac-newton-46975.html |
Tuesday, December 20, 2011
Isolating a Common Surname
Wiley & Martha (Maloney) Johnson
Some families get all the luck - THEY have ancestors with unique surnames that sound and are spelled the way they are supposed to, which means finding them in a census search can be a little bit easier. With a common surname like Johnson, Smith, or others, doing searches for quality data nuggets is always a challenge. But there are a few simple tricks to help isolate, identify, and confirm your genealogical ancestors, cousins, inlaws and outlaws.
Several techniques I use are documented in other posts, like using full and partial phonetic sounds and "exact" spellings.
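A phonetic search of the kind mentioned above is commonly built on the Soundex code, which collapses similar-sounding spellings of a surname into one four-character key. Here is a minimal sketch in Python; the `soundex` function follows the standard American Soundex rules, and the census-style records are invented for the example:

```python
def soundex(name: str) -> str:
    """Minimal American Soundex: a letter plus three digits, so that
    similar-sounding spellings (Johnson/Jonson/Johnsen) share a key."""
    groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3",
              "L": "4", "MN": "5", "R": "6"}

    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and H, W, Y carry no digit

    name = name.upper()
    result = name[0]
    prev = code(name[0])
    for ch in name[1:]:
        digit = code(ch)
        if digit and digit != prev:
            result += digit
        if ch not in "HW":  # H and W do not break a run of duplicate codes
            prev = digit
    return (result + "000")[:4]  # pad or truncate to four characters

# Group a census-style list of records by phonetic key, then filter.
records = [("Johnson", "Pulaski Co."), ("Jonson", "Pulaski Co."),
           ("Johnsen", "Giles Co."), ("Jackson", "Pulaski Co.")]
target = soundex("Johnson")
matches = [(n, place) for n, place in records if soundex(n) == target]
print(matches)  # Johnson, Jonson and Johnsen match; Jackson does not
```

From there, the post's other advice (narrowing by location or by an in-law's surname) is just another filter applied to the phonetically matched records.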
The key is always to narrow down the list from hundreds to a few dozen, until you have only a few left. Of course, the more you know about the subject, the better, but along the way, you need to make some assumptions, keep good notes on all family names and never jump to conclusions.
Always try to narrow your search by location, year, familial connections, sex, race, and any other biographical information you may have. If you have done this and still need help, consider trying to isolate your subject through other, similar document searches. For example, if you are doing a Federal census surname search and still find too many people with the same name, take a peek at the State census records. Since most states took their census five years after the Federal one, you may find the pool of people with a common family name gets smaller.
If you anchor onto a location, surname isolation becomes much easier. If you are certain of the place, it becomes a puzzle for you to solve by building up the connecting pieces. BEWARE that you do not identify a different family - this can happen the further you go back, which is why it is always best to keep a list of the paths you followed and the reasons each one led to a dead end or to the next line.
Another method that I use is to isolate by an in-law's family. For example, if you are searching for Johnson and John Johnson married Sarah Butterfield, search for the Butterfields. They are likely in the neighborhood and can help you find a name through marriage.
If you have any questions, feel free to contact me.
A cousin in law (not out).
(CNSNews.com) – The U.S. Justice Department says it has reached a settlement with the Sacramento (California) Public Library over a trial program that lets patrons borrow Barnes and Noble NOOK e-book readers.
A DOJ official told CNSNews.com it interviewed a woman who could not participate in the library's e-reader program due to her disability and concluded that the program had violated the Americans with Disabilities Act (ADA).
Amy Calhoun, an Electronic Resources Librarian at the Sacramento Public Library who helped launch the ebook reader project, said she was unaware of any objections from a blind person regarding the program. “I have not heard of a specific complaint directly from a patron,” she told CNSNews.com. “But I do know that patrons who are part of the statewide Braille and talking-book program do get in touch with us for audio books.”
Your tax dollars, hard at work.
All due respect to the blind, but why should their disability limit access to e-book readers for those who aren’t blind? Per the article, this library has both Braille and audio books available for the blind. It’s not like anybody is trying to discriminate against the blind.
What’s next? Should we make stairs illegal because people in wheelchairs can’t use them? Should we make cars illegal because blind people can’t drive them?
• http://Sayanythingblog.com The Whistler
I think the ebook revolution is going to make things all the better for people with all kinds of disabilities. My Kindle will read to me. Having all books available electronically should make all kinds of other aids more effective.
• SigFan
I don’t know about the Nook but I suspect it has the same capability as my Kindle has which is an option of text-to-speech reading and you can use audiobooks through it as well. This is just one more example of the government poking their nose where it isn’t needed and doesn’t belong.
• kevindf
Have they never heard of audio books?
• Bat One
I will take foolishness like this (and the Obama DoJ) seriously when someone can explain what politically correct moron decided that drive-up ATMs need to have instructions in Braille.
• gregb999
The panels are made at the same factory and are often interchangeable with the walk up models. I know a couple of banks near me have the exact same ATMs both in the bank entrance and in the drive-thru. It’s only what surrounds them that is different.
• tony_o2
While you are correct that the ATMs themselves are usually the same parts, it isn’t the reason why drive-up ATMs have braille. Drive-up ATMs are also accessible by people not driving, therefore the ADA mandates that they have braille and audio instructions.
I did work for a bank that had a non-compliant drive up ATM. It did not have audio capability, and it would have cost too much to replace it with a new one. To accommodate the blind customers, they installed a cheaper walk-up ATM in their lobby entrance and put a braille sticker on the drive-up that instructed people to use the other one.
An ADA compliance officer still wrote them up for the non-compliant drive-up. He told them that it was still discriminating against the blind because it was forcing them to use a “segregated” ATM. They ended up taking out the drive-up ATM and now everyone has to go inside to use the ATM.
• gregb999
Ah, thanks for that info. I never knew most of it. Just always heard the part about parts being the same.
• AV
ADA was passed in 1990, by Bush I. DOJ has been around for a while too.
How is this Obama’s fault?
• http://Sayanythingblog.com The Whistler
Bush the senior was wrong but perhaps well meaning for passing the ADA. The Obama administration is ridiculous for expanding it in this way.
• AV
How has Obama expanded it? Or did you just make that up?
• leh
I do think that both AV and Eat This are HUA.
• Eat This
Because a republican did it, we will give them a pass because they meant well. When a democrat enforces it, he is expanding it. Maybe you should be happy about the ADA since you are obviously retrded.
• http://sayanything.flywheelsites.com Rob
Because they’re the ones applying the ADA law in such an absurd and stupid way, perhaps?
• Eat This
You are making it sound like they are twisting the law around to pick on e-readers, but AV is right: ADA has been around for many years and they are just applying the law as it is written.
However, I’d almost be willing to bet that the DoJ and White House websites are not fully compliant.
• AV
Turns out that ADA wasn’t future-proof, there was also a ruling about websites too?
But you are suggesting that Obama and the DoJ should apply the law (even more) selectively? Is that really what you want, partisan and inconsistent law enforcement?
The correct response would be for the ADA to be fixed in some way, but that is out of the Democrats’ control, and do you think that Republicans are going to help out Obama, to fix it?
• tony_o2
I don’t see why the Republicans would be opposed to fixing the ADA laws, if it was a stand-alone bill that they would be voting on.
• Bat One
ADA should be third on President Romney’s list of legislation to repeal, right after he disposes of Obamacare and Dodd-Frank.
• Jay
Idiots. I own several e-readers. Am I discriminating against the blind because I have devices they can’t use?
O! What fools these mortals be!
• http://flamemeister.com flamemeister
Reality discriminates against liberals.
• The Political Informer
And Obama wonders why people dislike him.
Maybe it’s because of this insanity.
• jimmypop
i wonder if this same lady was curious why she didnt get a gold medal in the mens 100m dash. some people just cant do some things. that doesnt mean we ban those things.
• Simon
There are audio and Braille books. Both selections are very limited. Braille books are very large, and not every blind person knows how to read Braille. As sad as that is, it’s reality. Audio books are great, but selection is sometimes quite limited. I have personally found books on Amazon that do not exist in audio format, Braille, or any of the websites for the blind.
There are elevators and ramps for those who can't use stairs. There is not a specialized ebook reader for the blind that can handle Barnes and Noble or Amazon Kindle content. If I want to read an Amazon book, I have to download it to the computer, hack the DRM and break it, then use a program to convert it to a format my e-reader recognizes. So I think the real question is, "Should blind people be limited in their access to books?"
• tony_o2
Just because there is not a specialized ebook reader for the blind that can handle Barnes and Noble or Amazon Kindle content doesn't mean that those who are not blind should be limited in their access to ebook readers.
What the DOJ is basically saying is that because blind people cannot use these ebook readers, they should not be available to anyone. Unless you include everyone, you are discriminating against those who cannot use it.
If the library was dumping its braille and audiobooks in order to provide these ebooks, then it would be a case of them actively excluding the blind from their services. This could be argued as discriminatory and against the law. But that's not what is happening.
• bcliff
Blind skeet shooters waiting in the wings?
• Jackass_Jimmy
As I’ve said before… it doesn’t matter when you’re trying to appease the people whose votes you’re trying to buy. And that every moron’s vote counts the same.
I say let’s give everyone one vote for each dollar they pay in taxes. My how things would change!! Let’s hear the libs howl over this one… :-D | <urn:uuid:9a56817c-33e0-4b9a-967f-b15d1b5ea49a> | 2 | 1.570313 | 0.087035 | en | 0.963123 | http://sayanythingblog.com/entry/obamas-department-of-justice-e-book-readers-discriminate-against-blind-people/ |
5 Amazing Innovations that Have Won Edison Awards
5: i-LIMB Hand
For many amputees, prosthetic limbs have traditionally been associated with less than pleasing aesthetic value, as well as limited comfort and functionality. Thanks to continuing strides in technology, the i-LIMB Hand was successfully developed to address these issues. The i-LIMB Hand, developed by Touch Bionics, is the first prosthetic hand available with five individually powered fingers, which allows for vastly improved functionality.
The device functions when electrodes attached to the surface of the skin on the forearm pick up electrical signals generated by muscle movement. When the connection is made between the electrodes and these signals, the gap is effectively bridged between the remaining limb and the i-LIMB Hand, allowing the user to turn what is often termed a phantom limb into a functional one. This feat of engineering allows the prosthesis to work almost exactly like a real hand, giving amputees the opportunity to enjoy the day-to-day activities that people with two functional hands take for granted.
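The control scheme described above (surface electrodes pick up myoelectric signals, and sufficient muscle activity drives the fingers) can be sketched as a simple envelope-plus-threshold detector. This is an illustrative reconstruction, not Touch Bionics' actual algorithm: the `envelope` and `classify` functions, the window size, the threshold, and the sample signal are all invented for the example.

```python
def envelope(samples, window=4):
    """Crude amplitude envelope: mean absolute value over a sliding window."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(abs(s) for s in chunk) / len(chunk))
    return out

def classify(samples, threshold=0.5):
    """Map the EMG envelope to a hand command: above threshold -> close."""
    return ["close" if e > threshold else "open" for e in envelope(samples)]

# Invented signal: quiet muscle, then a burst of activity, then quiet again.
signal = [0.05, -0.02, 0.04, 0.9, -1.1, 1.0, -0.95, 0.03, -0.04, 0.02]
print(classify(signal))
```

A real controller would add filtering, per-finger channels, and proportional speed control, but the threshold decision above is the basic idea behind myoelectric switching.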
The night sky is brought to your fingertips by the WorldWide Telescope.
4: WorldWide Telescope
Traditionally, most budding astronomers have had to make do with a run-of-the-mill telescope and a clear night sky to learn more about the universe around them. WorldWide Telescope (WWT) stands poised to forever change these limitations by giving anyone with a computer the ability to explore outer space. Developed using the Microsoft® high performance Visual Experience Engine™, WWT allows people to explore the universe via computer by drawing on the best information and images from both space- and Earth-based telescopes. Users can easily explore stars and planets, watch planets and moons in action as they orbit, and study virtually any other aspect of the solar system made available by the most technologically advanced telescopes out there. The product was designed for the use and enjoyment of people of all ages, children included. Its user-friendly format and the wealth of information it imparts are just a couple of the major selling points for this product that goes above and beyond.
3: Tide to Go
It's a simple fact that everyone spills -- usually at the worst possible times. A spatter of spaghetti sauce here, a dribble of soda there, and a carefully selected outfit is ruined before you can say "marinara." To the relief of messy eaters everywhere, Tide launched the Tide to Go marker several years ago. The marker/pen, which is small enough to fit in purses, briefcases or even a loose pocket, is designed to save the day when food-related tragedy strikes. In the event of a spill, Tide to Go is best used by first removing excess residue, then pressing the tip of the pen to release the cleaning solvent as needed. Mechanics may not reap the same benefits, however, as Tide to Go is not as successful at removing oil or grease stains, nor discolorations caused by grass, ink or blood. When used correctly, the product (made from perfume and peroxide surfactant) can literally make stains disappear before the user's eyes. There's little doubt that more than one rising executive has averted a coffee-related clothing crisis thanks to this handy, Edison Award-winning product.
Celebrity trainer Harley Pasternak tries out Wii Fit at an L.A. event on Dec. 11, 2008.
2: Wii Fit

Traditional video games have gotten a bad rap for encouraging sedentary lifestyles. Wii Fit, a Nintendo product, is paving the way toward changing how video game consoles are used. The product has rapidly become popular among people who enjoy exercising, particularly those who find it difficult and inconvenient to visit a workout facility on a regular basis. Along with the convenience of exercising from home, Wii Fit also allows users to easily set goals and track progress. The recently launched Wii Fit Plus expands upon the original product's activity capabilities, offering multiple workout modes including yoga, aerobics, balance games and strength training. Variety is also a key factor in Wii Fit's success, with more than 60 different activities for users to choose from.
1: iPhone

Developing highly influential and top-of-the-line products is really nothing new for Apple. The mega-company may have topped itself with the launch of the hugely successful iPhone, which puts the world at your fingertips. Billed as three devices in one, the iPhone is an iPod, Internet device and phone, all with heightened capabilities. The touch screen phone is user-friendly enough for people of all ages and technological persuasions to navigate. The iPod allows users to purchase and play music, television shows and movies on a top-quality screen. The high-speed Internet component of the iPhone and the countless customized applications (maps, recipes, news, music and more) available therein have set the bar higher for competitor phones. The only minor drawback to owning an iPhone is that Apple is constantly updating and improving it, making each model sort of obsolete almost immediately after purchase. For the majority of iPhone fans, that minor inconvenience is nothing compared to the major perks that come with owning this pocket-sized cellular gem.
Sources
• Edison Awards. (Dec. 29, 2009). http://edisonawards.com/
• Fisher, Adam. "50 Best Websites 2009: WorldWide Telescope." TIME. Aug. 24, 2009. (Dec. 29, 2009). http://www.time.com/time/specials/packages/article/0,28804,1918031_1918016_1918007,00.html
• The i-LIMB Hand. Touch Bionics. (Dec. 29, 2009). http://www.touchbionics.com/i-LIMB
• iPhone. Apple.com. (Dec. 29, 2009). http://www.apple.com/iphone/
• Tide to Go. Tide.com. (Dec. 29, 2009). http://www.tide.com/en-US/product/tide-to-go.jspx
• WorldWide Telescope. (Dec. 29, 2009). http://www.worldwidetelescope.org/Home.aspx
Oral History Interview: Jay T. Last
Interviewed by Craig Addison, SEMI
Jay Last received a B.S. degree in Optics from the University of Rochester in 1951, and a Ph.D. in Physics from MIT in 1956. He was then recruited by William Shockley to work at the Shockley Semiconductor Labs. In September 1957, Last was one of the group of eight who founded Fairchild Semiconductor. At Fairchild, he worked on the first commercial silicon planar transistors, and then ran the R&D group that produced the first integrated circuits. In 1961, Last joined Teledyne where he formed the Amelco division, and served as vice president for technology, overseeing the technical interaction of Teledyne’s large number of divisions. After he left Teledyne in the late 1970s, Last became involved with a number of venture capital activities, and was a founder of the Archaeological Conservancy, dedicated to saving American archaeological sites. He started Hillcrest Press in 1982, publishing books dealing with California art, ethnic art, and the graphic arts.
Craig Addison (CA) of SEMI: Jay, could you start off by talking about where you were brought up and your education experience?
Jay Last (JL): Sure, I was born in Western Pennsylvania and went to school there in a small steel mill town. My father worked in the steel mill there and I was born in 1929—the week the stock market crashed—and so in my first decade, steel was a pretty tough industry to be involved with from my father’s point-of-view. My first 10 years were the Depression and after that there were five years of war, so by the time I was 15 I realized I had seen nothing but depressions and wars. I got a good high school education in this small town and then went to the University of Rochester and got a Bachelor’s degree in optics and had a very heavy physics training—which gave me the background, then, to go on to MIT and get a doctorate in solid state physics. Solid state physics was a relatively new field then. All of the technical developments from the 1930s and the things happening in the war just had this vast amount of physical phenomenon that were available for use in various commercial products and improving whatever we were doing. So my timing was just perfect. I got an education in solid state physics…so I knew the background of the transistor field.
So then I was approached by [William] Shockley when I was finishing my degree. For my doctoral thesis I was working with a very complicated Beckman Instruments spectrophotometer which didn’t work too well, so the Beckman people knew me and wanted me to come to work for them. I said, “The last thing I would ever want to do was have anything more to do with that spectrophotometer.” [Laughs] But that was just the time that Shockley had worked out an agreement with Arnold Beckman and Beckman said, “You ought to go talk to Jay Last,” so Shockley came up to MIT one day and we started talking. I realized that while transistors were not the field of my main choice, I found Shockley a very interesting and intriguing person. The other job I would have considered, of course, was Bell Labs. They wanted me to work there again in the same field of transistors but I made the decision to join Shockley.
CA: And you still have the offer letter that Shockley wrote to you.
JL: This is the letter. I’ll just read the first paragraph. This was when Shockley was still in Southern California at the Beckman headquarters. This was November of 1955. He’d hired then two people. I was the third one he was considering hiring. He said, “You have passed our tests with flying colors and I hope you’ll wish to join my project at the starting salary of $675 a month. We would like you to start as soon as you can after completing the work necessary to obtain your degree.”
So that was the start of it. I still had about four months work to do for my thesis and at that time Shockley hired a lot of other people. To his disappointment, he was not able to hire any of the key staff people at Bell Labs. I was told by the Bell Labs people who said, “You don’t know what you are getting into working with Shockley.” But I thought, well, California is where I want to live and whatever happens, it’s going to be an interesting time so I joined Shockley.
CA: The employment letter talks about tests. What tests did you have to do?
JL: It fit both Shockley’s interests and Beckman’s to give a series of psychological tests about…are you qualified to do this? The sort of test [that asks] “Would you rather be the motorman or the conductor on a streetcar?” That kind of test. I went through all those psychological tests.
CA: What were you actually doing at Shockley Labs in the first six or seven months?
JL: We were just setting up a laboratory, which started out to be a laboratory rather than a production facility, unfortunately, for the long-term…but just developing the semiconductor technology and building equipment. Shockley made two key decisions. One was that we were going to work with silicon and [that] the fabrication technology was going to be diffusion and those were the key decisions that carried on to this day. Shockley maintained good relations with the top brass at Bell Labs so we could see the Bell Labs reports on occasion and so we were up with all the technology. We had access to what Bell’s thinking was on these things. I was never involved in the diffusion but that was a key thing we were working on then. We were working on crystal growers, all these things. I was working on testing some of the four layer diodes. Shockley’s bad decision was to try to make [four-layer diodes]…a lot more complicated device than the transistor. He wanted to invent something new which we realized was not the wisest choice to make.
CA: So did you have much chance to use your optics knowledge at that time?
JL: At that time, no. I never used it in a really deep technical sense. At Shockley, no, I never did that. My main focus was on the solid state physics that I had learned. I had never seen a transistor at that time and didn’t know much about them so I was busy trying to learn how transistors worked, as was the whole group. We’d have little study sessions trying to learn what are the key things we should be focusing on. I wrote a paper with Shockley on some aspects of energy levels.
CA: Were there any particular recollections or demonstrations of Shockley’s brilliance that stood out to you?
JL: Just that every day, every time you talked to him, you could just see this. When I first met him, I had some stumbling blocks in my thesis which was an area he wasn’t terribly familiar with. But I started discussing it with him and he just immediately came up with a very good discussion of what I should be doing. I was just very impressed with him. I was then and I always have been. This guy was extremely bright. He paid a big price for that in that he was not terribly socially adept and didn’t understand what motivated people very well.
CA: People have said that because he was bright, in the scientific sense, that he wasn’t a good businessman and that he made some bad decisions there, i.e., the four-layer diode.
JL: Well, he had no business sense at all—which was not surprising. There was no reason he should have but he got himself in a situation where he was going to need that stuff,. So relationships deteriorated with Beckman and Shockley and that deteriorated our relationship so that individually most of us made the decision we were going to leave. And the story has been told so many times that we realized maybe there was some way the group of us—there was a group of seven of us then—could get a job together and keep doing what we knew we could do [which] was make a transistor, rather than go our separate ways. We would try to find a company to hire us. The Hayden Stone people with Art Rock came out and said, “You should start your own company,” which was just completely foreign to us and it was foreign to the venture capital world. There wasn’t any venture capital really at that time. But anyhow, we worked out an arrangement which led to us leaving and starting Fairchild and with some persuasion we persuaded Bob Noyce to join the group finally after we had got things fairly well set up. So the eight of us went off and started Fairchild.
CA: Looking back, what did you learn from Shockley during that brief period you worked for him?
JL: Well, I gained an understanding of transistors and we learned the basic technology we were going to use. When we left, there were still huge areas of things that hadn’t been done—how you put devices in packages and put leads on them and test them and make reliable, reproducible devices. I don’t think at that place [Shockley Labs] they ever made two devices that were the same.
Gene Kleiner’s father had some relationship with a Wall Street firm with some business he was running. So Gene just wrote a letter to these people. Art Rock was a junior associate there at the time and he had the wit not to throw our letter in the wastepaper basket. He and his boss, Bud Coyle, came out and talked to us. We decided at that time just to start our own company which was a really exciting, unknown thing for us to do. I was the youngest in the group then. I was 27 and the oldest was probably Gene Kleiner who was probably 32, so we were just a young bunch of guys. The agreement was that Hayden Stone would try and find a backer for us and we went down and listed about 20 or 30 companies that would be potential supporters of us to have our own enterprise and all 30 turned us down. The ones that got close, the lawyers would get involved and stopped the deal. And finally Art Rock and Bud [Coyle] got ahold of Sherman Fairchild who was intrigued with this; that turned out to be obviously the deal that worked.
CA: Now, of course, the term “the traitorous Eight” has come about, but according to Emmy Shockley, she said her husband never used that term. Do you know where that came from?
JL: This was some newspaper guy way after the fact. It was at least 10 or 15 years [later] before I had ever heard the term and it just caught on. People seemed to like that.
CA: When you were leaving at the time you weren’t called anything in particular?
JL: No, I don’t know what Shockley called us behind our backs but it was a great shock to him that we did leave but we knew what we wanted to do which was make transistors.
CA: Can you talk about the first few weeks at Fairchild. What were you focused on…just getting a building and getting it set up?
JL: An awful lot of things that we had never had any experience with we had to do. We had to get a building. We had to figure out who was going to do what. The interesting thing, as I look back on it, is the group we had. There were eight of us. We all had different skills but in the group we had all the necessary skills and it was a completely cooperative effort. We had no real boss. We had weekly meetings where we decided what we were going to do next. Bob Noyce and I were involved with a step-and-repeat camera. At the time, I did use my optics [knowledge]. Jean Hoerni was involved with diffusion and he had a great deep, physical insight into a lot of things about the physics of semiconductors. Gordon [Moore] also was involved with diffusion. He made a great contribution…he was the only one that knew how to blow glass so he was making all the jungles for the diffusion and he also was involved with metal evaporation. Sheldon Roberts just went off and got us right into the silicon crystal business. Vic Grinich was the one that really knew what transistors were and what they were used for. He set up all the testing facilities. Julie Blank was in charge of the facilities and also making equipment. Gene Kleiner was a magnificent equipment manufacturer, great machinist. He just loved that. He was good at that. And Gene also started taking over some of the administrative tasks. It’s looked on as a group of eight but we did get some other key people and that’s where the story is not quite accurate. The contributions of the people [outside] of the eight of us never got really recognized very well. Dave Allison, in particular, was a key person involved with the diffusions.
CA: I imagine that having worked together at Shockley and making the mistakes then, you were a pretty well-oiled machine by the time you were at Fairchild and could really get things done a lot more efficiently.
JL: Oh, it was a complete change. Shockley was a micromanager and kept us isolated from each other. He had secret projects with some of us that the others didn't know about, but here we were working with a group [having] this Monday morning meeting where we would get together every week and figure where we were at. It took a long time to get the facility put together. We started about the first of October [1957] and we had to get power in the building and had to make all the equipment we needed, all the furnaces. About the only thing really we could buy were microscopes. Everything else we had to make ourselves. We were focused on getting into the transistor business. An interesting thing was transistors up to that point had been made individually by various alloying techniques and this was the first time that transistor manufacture was a batch process. The wafers were minuscule by today's standards but there were still a lot of transistors on them. They were probably, I think, three-quarters-of-an-inch wafers… that was the biggest we could handle.
It was remarkable to me, when I look back on it. We went into an empty building without power and had to build all of the equipment and had to develop all the supporting technology. We had to make crystals. We had to learn how to cut crystals, lap them. That was an area that I could use some of my optics background…on making crystals. We had to build the diffusion furnaces. We had to learn how to make controllable, reproducible diffusions. We had to develop all the technology for putting metal interconnections on them. We had to figure out how to put them in packages, how to put leads on the package, and how to test them, and how to build a device that was going to be a high reliability device and so we did all these things. And I look back on it and I was just amazed. To realize that 10 months after we went into this empty building, we had a commercial product which we announced at Wescon, which showed the way that we were working and cooperating…each one of us depending on the rest of the group to do their part. They depended on me to do my part and everybody else to do theirs. We all did it. We worked together really without too much of an overall leader telling us what to do. The problems we had to solve were obvious and we just solved whatever had to be done.
CA: Jay, could you talk in a little more detail about the work you did in the optics area? You said you and Bob Noyce worked on that in the step-and-repeat camera and then also I believe you did some work on photo resists?
JL: Oh yes. I was working on that. The optics was not a question of making our own optics. We were just buying lenses and I knew enough optics…we wanted to get a matched set for step-and-repeat cameras so Bob [Noyce] and I went to a camera store and I picked out an appropriate set of lenses that were matched. And we decided to use photo resist in order to delineate the areas…Bell Labs had made some efforts there and thought this was just impossible to work with so they never pursued it although they had tried it. But we said we just have to do it and Bob and I worked with Kodak and they gave us the best resists they had at the time and we gradually had a working relationship with them that resists kept steadily improving. The problem was not putting the resist on. The problem was getting the resist off—without destroying the under layers. There were a lot of technical problems and technical setbacks there but, as I said, we just said we are going to use this and we have to make it work and we did. That was true throughout. Everybody else was solving their own problems.
We had to find a metal solution for making good contacts and Gordon [Moore] tells the story of working with aluminum which he said was Bob Noyce’s thought. I was joking with Gordon about that and I said, “I just think you were doing it alphabetically,” and Gordon said, “Yeah, that may be but I left out Argon.” He had the wit not to do that. [Laughs]
CA: Could you talk about the lead up to the planar transistor, how that came about?
JL: The first device we made was an NPN…we could make contacts to it and it was a lot easier to make. We didn’t get into some of the horrible problems of boron diffusion. Jean Hoerni’s bailiwick was working on the boron diffusion. Jean had been a theoretical physicist and had done work with crystallography but he turned into a pretty good experimentalist working with diffusion. I mean, he had just a “charge full-steam-ahead and try it” [attitude]. When he wanted to find out the limits on boron diffusion, he just kept doing it until it blew up and behind one of the diffusion furnaces there…it was a concrete wall and every once in a while the furnace would blow up and the whole works would come shooting out against the wall so you could see a little hole in the wall where Jean’s tubes had blown out.
Something that’s hard to explain is that it looks like…and this happens the whole way across technology…it looks like it’s enormous insights that lead to a new invention but this is so much based on past inventions and looking at what is practical to make rather than the key technical thing.
When Bo Lojek wrote his book [“History of Semiconductor Engineering”], he asked me to write a little testimonial on the back and this was a quotation that I had written for Bo for his book. “You and I agree that while the world loves a hero, semiconductor progress depended on the efforts and ideas of a large number of people and that moving forward depended on contributions going back a few decades in some cases. Also, as is the case with most inventions, a number of people with access to the same pool of common knowledge were working independently at the same time to put it altogether and to make the necessary extensions to the existing technology and who realized that the time was right for society to accept the new concepts.” That says that nearly all technical progress is a group effort and always has been and that was certainly true at Fairchild and [there were] a lot of unsung heroes involved in all of these things.
CA: I’ve also read that Autonetics, the Minuteman contractor, was a key force in getting to the planar transistor. Could you talk about that?
JL: It was key. The transistors were very expensive, difficult to use…completely different design concepts than were needed with tube design and the use for transistors was completely military. The military needed small devices that could be used for airborne computers and they also had temperature constraints which meant that they had to use silicon rather than germanium and we were at the right place at the right time. We could make the transistors that were needed and so Autonetics came on for the Minuteman program which was a major project and they had very high reliability requirements, obviously, and we had separate production lines there and had sort of almost a division of Autonetics at one point for some of the things we were doing. And then we got into a problem with little metal particles bouncing around inside the package that were shorting out the devices. Finally we realized what that was but that was just the time that Jean made his first planar transistor.
From Jean’s widow I’ve gotten some source documents to figure out what was going on and what Jean was thinking about. We started [the company] in October. In December [1957] he had a long notebook entry discussing the planar transistor so he had the idea of it but the technology wasn’t ready and we were completely focused on making our first devices. He [Hoerni] was trying to make a PNP transistor whereas a group lead by Gordon was making an NPN but after a year and a half…let’s see, it would be early 1959, Jean went back and started thinking about the planar transistor again which was going to involve a fourth mask to delineate the base area and I made that mask for him. Jean wrote up a patent application for this and showed it to Bob Noyce. And Bob, after seeing this planar thing getting ready to come along, wrote down his ideas for an integrated circuit and I have that documented to the day. Jean had his idea, he talked to Bob and Bob a week later wrote down his integrated circuit thoughts.
With all of these things it wasn’t, as I said earlier, an enormous leap forward in imagination. You sit down for a few minutes and you could visualize these things. The key thing is what can we make? Every day we could come up with a dozen new great ideas of things we could do but the question was 1.) could we make them and 2.) would the world buy them? So we were focused a lot more than a lot of the venture capital firms are today that think the world is going to pay them for being bright and having a bright idea. We learned quickly in those days the world doesn’t work that way.
CA: I’ve read that you jury rigged the fourth mask for the planar transistor and that you also witnessed the first demonstration. Could you talk about that?
JL: Well, I just made the mask. We just made an outrigger for our three-step camera. It wasn’t a big problem to do it. The tolerances in those days were…we were talking a thousandth of an inch or something. Now you are putting half the world inside of what was our tolerances in those days. These were just minor things and Jean [Hoerni] made the transistor in a couple weeks. Talking to Gordon Moore, Gordon said there had been some work and some jury-rigged work done on planar thoughts earlier. That’s something I don’t remember or know much about but the first one that was a real planar device was the one Jean made. Jean worked by himself pretty much and did it all himself and just showed us an accomplished fact and we were just startled to see the improvement in the device. The classical thinking was you had to take the oxide off because all the impurities would collect at the interface and Jean felt the opposite that it might be protection if we just left it on there.
Bell Labs had been down that road obviously before and then they didn’t pursue it to any extent. Michael Riordan wrote an article a couple years ago about why Bell didn’t do all this stuff first because they had all the basic technology, all the basic ideas, but their focus was on building devices, mainly non-military, and building devices that were going to have widespread use in the telephone system. So a lot of great ideas just never got pursued there, even with all the stuff they were doing. Our focus was on making devices that worked and Jean’s planar device came just at the time when we were really getting in trouble with the flying particles and we realized that we finally had gotten the yield up to a reasonable level on our mesa devices. The mesa technology we had developed pretty well.
To step back a second, one of the things I worked on is how you define the mesa area. You had to put little dots and I figured a scheme where you could make an array of little tiny wax dots that would line up with the transistor base which could be used to etch the pattern. I had strange chemicals and all kinds of wax to work with but that was just an example of the sort of thing that we had to develop to make the first transistors. That technology would be replaced by the planar technology that in the long run would make it a lot easier to produce devices…but on the short term, the yields on the early planar devices was very low. Again, it was one of these things you had to do and so we did it. And, as you mentioned, Autonetics got involved in that. I forget the exact timing but the planar just took over the world.
CA: And the planar transistor was demonstrated on March 4, 1959, is that correct?
JL: I’ve been around that with Michael Riordan [co-author of “Crystal Fire”]. I think it was the next week, not that it changes history that much, but we had hired Ed Baldwin to be our general manager and he left with some of the people to start a competing company. The week after that was when Jean demonstrated the planar so Ed should have stuck around another week. He’s going into business to make a device that we were rapidly going to outmode. But the planar worked and we had complementary NPN and PNP devices which was a great source of income. Other people had a lot of trouble making the PNPs and we could make this as a matched pair which was useful for all kinds of circuit applications.
CA: I’m just curious. Where did the planar name come from?
JL: I don’t know. It wouldn’t surprise me if it was Tom Bay or somebody in the sales department but we came out with brochures on it rather quickly and announced it as a product. It took about a year to get it in production but it was a tough year to develop these into high-yield processes. This question of yield is a tricky one. When you are working with a 3 or 4 percent yield, you just have mountains of unusable stuff. Then your yield goes to 30 or 40 percent and all of a sudden you’ve got more devices than you know what to do with [laughs]. So developing this whole thing of production engineering and making reproducible devices was a big project and we had a lot of very good technical people who were working on that aspect of it.
CA: So the planar process quickly put a lot of other companies out of business, like Rheem [Ed Baldwin’s company] and Philco. They had mesa lines and suddenly they are obsolete.
JL: The interesting thing to me in that regard is that when there was a radically new technology, a new company came along and used it. Transitron and a few other companies were very strong in germanium transistors but didn’t make the transition to silicon. The companies that were making devices by alloying went out of business and they never caught up. It still surprises me why RCA and Philco weren’t the big shots in the new technology but it’s just the way the world works. If you can start with a clean slate, you could focus on what you could do. You don’t sit around and tell war stories about the old ways that you used to do stuff and you can see that Fairchild had the MOS technology but it took Bob and Gordon to go start Intel to focus on MOS. MOS technology in those early days was really a tough thing to deal with. The unknown problems were with surface states on the devices. And here you are trying to make a device that was based on those surface states and I remember saying at the time…it was in the early Teledyne days, we were talking about MOS and I said, “If they use MOS devices on an airplane, I am going to take a bus the rest of my life.” [Laughs] That was my feeling and the feeling of a lot of other people then.
CA: Jay, could you tell me when you first heard about TI’s integrated circuit and sort of what impact that had at Fairchild?
JL: This was early in ’59 that TI started talking about that—and TI had a terrible reputation for getting patents and suing the hell out of everybody else—so our first thought of this was we have to show the flag too. TI…the first things they were talking about were extremely elementary devices and calling them an integrated circuit was a stretch of the imagination. Really, it was individual devices—just a transistor and a resistor essentially—on a piece of germanium but it was something they were talking about. So I remember at Wescon in 1959, it would be August of 1959, Bob said, “Hey, we have to show the flag here.” He and I talked and I made a little device with four individual transistors in it and resistors from a pencil but at least it showed we were in that area. And at the time, Bob and I discussed this, I was at loose ends. I had done my part on making the first transistors and was looking ahead for new devices. I did some work on a parametric amplifier diode for a while, which looked like it was going to be a good device, but I was ready to do [work on integrated circuits] and was given the mandate to start a group to do this. So I went out and hired a group of people and started in.
The main problem we saw was we knew how to make planar devices but the problem was electrically isolating them. It turned later into big patent wars on this stuff and it’s interesting that the three key things you need [for an IC] were three separate patents by three separate people. Kilby [at TI] got the patent for putting various devices on one piece of material. Fairchild got the patent for interconnecting devices on the surface of the wafer from the planar device and Kurt Lehovec at Sprague got the patent for the diffused electrical isolation to isolate the devices. The isolation was the key problem we faced. The technology was not ready to do a diffused isolation. This would involve diffusing boron the whole way through the wafer which was going to be something like an 18-hour diffusion. We were with difficulty able to do a 15-minute diffusion with boron before the furnaces started sagging and getting soft so it was out of the question technically to do that.
The way the first devices were made was by taking the device, turning it, fastening it on to a plate with the operating side down, etch the whole way through the device until you came to the oxide on the back, which is only going to be a few wavelengths thick, and then fill it with some kind of a filler. So we actually developed a technique for making those devices and it involved a lot of technology. For example, how do you see the front from the back? So we developed infrared alignment devices…silicon was transparent in the infrared and we could see what was on the other side. I mean, these devices never would be reliable. As I look back on it, there’s no way but we did demonstrate the point that we could make integrated circuits this way. So we were proceeding down that line mainly with the efforts of Isy Haas and Lionel Kattner and some ideas on improving boron diffusion that had come from the pre-production engineering of some other materials. We said, “It’s worth a whack” and we went ahead and did these diffusions and the first devices came out.
The first device we made with the back-fill technology was made about May 1960 so we had devices coming along and made the first one with the diffused isolation in probably November, something like that. But as late as November, Gordon [Moore], who was head of R&D…I have a memo that he wrote and it said, “We are going to have to make a decision; are we going to use the diffused isolation or the back-and-fill isolation?” It still was an open question then. But the integrated circuit was not a big deal. Now it looks like we should all have been walking around in hushed tones and saying “My God, we are going to change the world.” It wasn’t that way at all. It was just another more or less research curiosity at the time.
First of all, the Bell Labs people went through [the same challenges]…and there was a famous thing that [Mervin] Kelly [of Bell Labs] said, which was “the tyranny of numbers.” When you try to make a lot of devices together, if you have a yield even as high as 50 percent, if you have a dozen devices on there, your yield goes to zero. He was wrong on that. We started demonstrating that…if we start making these arrays, what’s going to happen? We went through and said first of all yield is defining a device that meets a lot of criteria. The device we have on this integrated circuit only has to meet one criterion. It has to be matched. That’s the one thing the integrated circuit did. The devices are automatically matched because they are made together and had to be matched. Isy Haas took reject devices and started mocking up integrated circuits and they worked…and we also realized that yield was not a random thing. There would be blotches of good areas and bad areas on a device and all we had to do was find a bunch of good areas, so we said the tyranny of numbers just doesn’t apply here. There’s going to be a low yield but we are going to get good devices and we started proving…that yes indeed we can make integrated circuits. There was no difference in our first problems of making our first transistor. The yields were horrible and you just by brute force figure out why are the yields so low and gradually clean up the process. We weren’t terribly clean in those days. We realized that we were going to have to do a better job than we had done. Bob Noyce was a heavy smoker and he’d come around the lab smoking all the time.
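[Editor’s note: the “tyranny of numbers” argument Last pushes back on rests on the assumption that defects hit each device independently, so the yield of an N-device circuit is the single-device yield raised to the Nth power. A minimal sketch of that naive estimate follows; the function name is illustrative, not from the interview.]

```python
def naive_circuit_yield(device_yield: float, n_devices: int) -> float:
    """Kelly-style estimate: assumes every device on the chip must be
    good AND that defects strike each device independently."""
    return device_yield ** n_devices

# With 50% per-device yield and a dozen devices, the naive estimate
# collapses to 0.5 ** 12, about 1 chip in 4096.
print(naive_circuit_yield(0.5, 12))
```

[Last’s observation that defects came in “blotches of good areas and bad areas” is exactly why the independence assumption, and with it the pessimistic estimate, broke down in practice: within a good blotch, yields were far higher than this model predicts.]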
So we went ahead and by the end of the year Bob Norman worked on the DCTL ideas which involved transistors and resistors, both of which we can make easily. So then we ended up with an IC and then we made the first one that had four transistors and quickly made the whole family. And I left Fairchild shortly after that, and Lionel Kattner over the next six months got the whole family into production so by the end of 1961 it was an established Fairchild product. The only problem we faced was the world couldn’t have cared less about the integrated circuit at that time. Transistors were specified by circuit designers who put them into their own circuits and the last thing they wanted was somebody to sell them the complete circuit to put them out of business which was the big reason that the sales department was not keen about it. Also, it was a lot more expensive way to do things. So the only use for integrated circuits was military applications where small size was the key [requirement] and it took several years before the first inkling that this could potentially be a cheaper way of doing it. And Gordon Moore told me that when he came up with what’s now called “Moore’s Law”, he said “this was just strictly a sales tool. I was just trying to point out to people that we’re greatly increasing our technology and our ability to do these things and this is going to be a cheaper way of doing it.” But it took a long time.
CA: Jay, could you talk about the key people who were involved with the Fairchild IC, besides yourself?
JL: The key people were Lionel Kattner who had come from TI and was the key diffusion person involved with this. He and I laid out the first circuits together so he was my key person. Jim Nall was involved with improving the step-and-repeat camera processes and working on this infrared device for the line up. Sam Fok was involved when we were making the isolation by mechanical means. He was involved with finding good waxes and methods of making a device that way. Art Enquall was involved with work on improving the photo resist processes. And I’m drawing a blank on a few of the other names. This was my group that was working on actually fabricating the devices. I mentioned Isy Haas. Isy was an electrical engineer and was involved in looking at the device parameters and in addition got involved in the diffusion, so he and Lionel together and independently were working on the diffusions that were necessary to do this. There was a whole parallel group under Vic Grinich which was device application…which was Bob Norman and Don Farina and a big group of people there. My group was building them and they were testing them and seeing how they would work and what needed to be done.
CA: Talking about the equipment you used, did you buy any of that from outside or everything was built inside?
JL: The diffusion furnace part of it was the key item…Art Lasch was involved at Fairchild making furnaces for us and he went off with our blessing and started his own company, Electroglas, getting into the diffusion equipment business. We could, of course, buy other sorts of equipment [such as] metal evaporators. Test equipment was another whole area where we had to build the equipment and we got so good at that that Fairchild set up a separate test equipment division.
I remember…we needed stereo microscopes for a production line and I went to Bausch and Lomb looking at the ideal microscope for us and I said, “We are going to need a few dozen of these,” and they said, “We only make six of them a year.” So that was the state of the technology then. We needed all sorts of things setting up a facility…backing up a bit. We were starting to work with some pretty nasty chemicals—hydrofluoric acid and things like that, and trichloroethylene. Some of these were pretty nasty customers and we had no knowledge of that and how bad these things can be and what do you do with the waste products? We were going from things that the chemical companies were used to selling just a tiny little bottle of and we wanted a car load lot of it. So we had an awful lot of stuff to do to learn how to handle these really nasty things…and had them on a production line with people who didn’t really appreciate how bad this stuff could be—so we had to put in some pretty severe work rules that people resented.
So the whole way along [there was] the development of all kinds of new technology. And in general it was just scaling up what existed instead of one little thing. We want hundreds of thousands of them. Charlie Sporck developed [the ability] to make reproducible things and here the learning curve really worked in your favor…that these things started at a low yield. The yield would increase. I know the first order that Fairchild took from Detroit was an order for a vast number of transistors at a fraction of the cost that it took us to make them and Charlie said, “We will be making large numbers. The costs will come down,” which proved to be the case.
CA: Why don’t we move on to your departure from Fairchild. Could you talk about the events that led to that and what your thinking was?
JL: My thinking there was we had made the first integrated circuit. For reasons that are pretty clear to me now…this really didn’t fit into the Fairchild sales programs. Fairchild was building a very successful company making transistors and diodes. Planar diodes were really a great business for Fairchild and that was, again, Jean Hoerni’s thinking. So obviously you should be focused on what you are good at and here the integrated circuit was a side thing…1.) you couldn’t make it very well and 2.) nobody wanted it except for specific military programs. It turned out the thing that made the integrated circuit take off once again was a second Minuteman program based on integrated circuits instead of the transistor. That happened several years down the road.
But the character of Fairchild had changed. When we started it was eight of us together. We each owned 10 percent of the company or some number roughly like that. We were all equals working together as a team. As the company got big and more stratified…first of all Fairchild [Camera and Instrument] exercised their option to buy the company from us. That happened way before we ever thought it would. I mean, it was just a couple of years and so I realized, as did a number of other people, the company was getting stratified and I was an employee in a big company and so the group spirit was going away. Bob Noyce ended up running the place. Gordon was running the R&D and they were doing good jobs at that but it was not the way it had been. When Fairchild owned it…some of the Fairchild management in the east who were running the company obviously could do what they wanted to do with it. So I was missing the excitement that I had when we started.
I also felt that integrated circuits were going to be a major thing and whether they were or not, I wanted to work on them. I wanted to work with somebody that really wanted integrated circuits rather than in the Fairchild case [where] it was going to be a distraction and would not be popular with the marketing people. So I met Henry Singleton who was starting Teledyne and he wanted to build a company based on integrated circuits for advanced military systems, so this was just what I was looking for. Jean Hoerni and I were very close friends. We were mountain climbers together and Jean, more than I was, was feeling the sense that he wasn’t part of a group any more and so we met Singleton and decided to go into business. And Sheldon Roberts and Gene Kleiner were both feeling disaffected the same way that I was and Jean was, so Sheldon joined us. Kleiner wanted to go off and do his own thing but he came in for the first six months or so and helped set up all our business practices and things like that. So here was half the original group left to start this and I could see here that I was in an environment where integrated circuits were going to be an essential part of the company’s growth.
Together with optics engineer Bob Lewis, we made a step-and-repeat camera, an optics setup that was just an order of magnitude better than what Fairchild had—so we could make extremely tight tolerance devices and make very sophisticated devices with our step-and-repeat cameras. The big thing we missed out on was epitaxy that was coming along. We just never got into that at the right time.
CA: So as you say, four of the original founders left…I guess Bob Noyce and Gordon must have not been happy.
JL: They weren’t happy about it. It didn’t help the morale in the company to see us all leave. We were reasonably well liked there. But I kept good relations with Fairchild…it would have been suicide to compete directly head-on with Fairchild. Signetics learned to their horror later on that competing with this big gorilla was a disaster. So what we did…our original mandate was to make very sophisticated devices that could be used for the Teledyne Systems Company to make very sophisticated military equipment. Henry Singleton had come from running a big division at Litton where there were inertial guidance systems and all these aspects. So here I could work with the systems guys and we made a lot of very sophisticated products. Unfortunately, this stuff was all classified and the records just don’t exist. And also when the space programs came along, Teledyne was very heavy in all aspects of that. We had a number of things on the first moon shot we made. We [Amelco] had the doppler device that told you how close you were to the moon’s surface and all sorts of stuff.
CA: Just backing up a bit, the name Amelco, is there any particular story behind that?
JL: Yes. Singleton and George Kozmetsky started in business. They wanted to build a big conglomerate and I asked Henry if he was trying to build another Litton. He said, “Hell no. I’m going to build another GE.” So that was his thinking…our division was the only inside technical thing that they were developing. Everything else was by acquisition and one of the companies they bought was sort of a run-down job shop in L.A. called Amelco that was making job shop military things. They had a big tax loss and Kozmetsky said, “You name your company Amelco and then we’ll be able to use these various tax [write offs]…” If I was picking a name, then Amelco probably would not have been the one I would have picked but we went ahead on that basis.
CA: What role did Arthur Rock play in this when you left and set up the company?
JL: Art was at that time on the board of directors of Teledyne and he had come to me as early as August of that year. And he said, “The next time you are in Los Angeles you ought to go talk to Singleton. He’s quite an impressive guy.” And I never did so it was just before Christmas [1960] when the Hayden Stone people, I think it was either Art or Bud Coyle, called me and said, “Hey, Henry Singleton’s at the other end of the line down there waiting for your call. At least call him.” So I called him and Jean Hoerni and I went down there New Year’s Eve and started talking to him and we very quickly said, “This fits our plans. We’ve got somebody that really likes the sort of things we want to do.” So we joined and Henry wanted to put it in L.A. and I said, “No way. There’s the infrastructure in Silicon Valley,” or what is now called Silicon Valley. It’s so important that that’s non-negotiable. We have to do it in the Bay Area. He said OK, so we started out.
The problem with Teledyne was that it was underfinanced. Henry was working at the limit buying these companies…he was helping as much as he could but we were always under financial strain. I remember one day I was having trouble meeting payrolls and I just got on the plane and flew down to Henry and said, “Look, I need $100,000 right away. I’ve just had it.” And he turned to Betty, his secretary, and mumbled something and he came back and said, “Here’s a check for $60,000, not $100,000. I’m giving you $60,000 because that’s all the money there is in this whole damn company.” [Laughs] So that was the shoestring we were doing it on. That came later on to haunt us because we didn’t have the resources to build the mass of low-cost production lines, not that that was my temperament anyhow, but we also had the role then of supplying devices for the systems and also being self-supporting by making products. Jean was very fond of field effect devices and we had a good business there so we had both internal and external sales…which were sometimes a little hard to see which one was going to get the priority.
CA: I’ve read that because of this financial situation you went and bought equipment from outside, for example, diffusion furnaces from Electroglas? You would have preferred to build your own?
JL: Oh, no. That was always the way the whole way through. If you can buy it, buy it. I built the step-and-repeat [camera] because there was no commercial supplier for anything like that. I had a lens system that was on a bed 20 feet long with the lens about four feet in diameter. I found somebody to build this lens track for us. I said, “What are your qualifications?” And the guy said, “I developed equipment to put asparagus in a can. If you can do that, you can do anything.” [Laughs] And it proved to be right. But then anything you can…you buy. Life is too short. And diffusion technology was moving pretty fast…and as the years passed, it got very specialized and sophisticated and a lot of the big [equipment] companies developed out of that. Lead bonding equipment and all that sort of stuff was a real pain to try to build. You could start buying this stuff. And companies like Tektronix with specialized oscilloscopes, you could buy all that stuff.
CA: You talked earlier about not wanting to compete head-on with Fairchild because that would be suicide, but when Fairchild dramatically cut the cost of the IC, I know that had a pretty bad impact on Signetics, but how did that affect you [at Amelco]?
JL: Didn’t affect us at all. The products we were making…our external market was not something that was competing. I mean, Signetics and Fairchild were just head-to-head on a circuit that was a lot more useable than the DCTL we were making. I was always intrigued with linear circuits rather than digital and one big step forward we made at Amelco was making very sophisticated operational amplifiers. That was a business that Fairchild got into a little later and that was one place where we did meet head on. But with that digital technology there was an awful lot of very specialized things we were building for various space and military programs. We were building in small quantities which is no way to run a business in the long run…it’s more supported research rather than mass production.
Another thing we did at Amelco was develop a way of taking bits and pieces, little circuit pieces and building arrays, getting a lot of stuff packed in a small volume. We developed a lot of technology for that which was of great interest for military systems and it turned out to be…long term, this division is still in existence 50 years later—cranking this stuff out and it’s now focused mainly on medical equipment. So we did a lot of things connected with very sophisticated packaging. I remember building a little EKG device for the astronauts to wear in a centrifuge…we had to build an EKG with a transmitter on it and that’s an anecdote that I laugh at when I think back on it. We were just outside the FM band to transmit this and we pulled it down into the FM band so we could test it…we were picking up a country and western music station and this was a Sunday afternoon I remember. The first song that came out over this thing when we turned it on was “Nobody Knows the Troubles I’ve Seen.” So I still laugh when I think of that. So we were building a lot of device arrays.
At Fairchild, we never took any military contracts. Amelco was just the other way around so I made that choice. I remember at Fairchild some of the military people coming and saying, “Why don’t you ever want to take our contracts?” And I said, “We need the freedom and flexibility to develop our own products that we are willing to pay for.” And I looked at Pacific Semiconductors…that was the company that really scared me as far as the technology they had. They were supported by military contracts making very specific transistors and I thought in a big system there is going to be one of their transistors and there’s going to be 50,000 of ours and which one do you work on? So at Fairchild we were successful in a hurry, especially with the PNP transistors. We didn’t have competition and we could charge a lot for them so we had a lot of money floating around. We had the luxury, for example, of supporting the integrated circuit program at the time I was doing it. So finances in those early days were not the problem. That [problem] developed later on when Fairchild got into both managerial problems and problems of competition.
CA: Probably just to finish the story, how and why did you end up leaving Amelco?
JL: Jean left after a couple of years. We were both vice presidents of Teledyne so we could do pretty much what we wanted but Jean was having some difficulties with Kozmetsky. After a couple of years he wanted to leave, and not on terribly good terms, and start something else. That was just his nature. So we got Jim Battey in as general manager and I was running the company from the technical point-of-view…Teledyne was growing very rapidly during this period by acquisition. When we started, it was just Amelco and one other little division. When I left there were about 150 divisions. I took an interest in looking at how this whole thing was developing. With my physics background, I could pretty quickly understand what these people were doing in most cases and got intrigued with this big assembly of companies that was developing and started writing to each of the managers of the companies and trying to make some sense out of where this was going. George Roberts, who came in as president of Teledyne, was intrigued with the way I was approaching this and said, “Why don’t you come down to L.A. and be a vice president for technology and just do that full time?” which I did…I was there a total of 12 years, I think. By then I was in my mid-40s and I said, “Life is just too comfortable for me here now. I can do what I want to do. I have a plane that flies me around. I’m too young not to have any more challenges. I’m on good terms with everybody here. I like them. They like me. This is the time to quit and go off and do something else with my life.” So I just left and for the next year I thought that was about the dumbest thing anybody ever did. [Laughs] The shock of no longer having a plane to fly you around and not having a [company] credit card…this happens to a lot of people who change careers but that’s something I did and after I settled down, I focused on other things of interest in the rest of my life.
Those first years and looking back at how compressed things were…making the first transistor in under a year and turning into the big technical company in the business so quickly within a year or two. We weren’t volume leaders but we were the technical leaders. And we had one big advantage then. Every good engineer in the world wanted to work for us so hiring people was no problem. We were hiring at an enormous rate and outran our space requirements. We had the one building on Charleston Road, another building behind it and the one across the street. Ed Baldwin, when he came in, went to the Fairchild management and said, “Look, you have to build a big building fast.” We had not sold anything at that point and that gave us the momentum and the Fairchild management went along and built this building and I was hiring engineers.
I spent every evening it seemed going to San Francisco taking [prospective employees] out to fancy restaurants. There was never any problem persuading engineers to join us. It was a problem of their family and resettling their family so the problems that arose were along that line and I remember hiring some very top-level guy and he came to work and I took him into the annex in the back and pointed to a desk and said, “You are sharing this desk with five other people. This is your desk drawer.” [Laughs] That’s the way we were expanding in those days. We were just at the right time for making a product that the world needed and we had the right technology and what technology we didn’t have we developed, so we were able to move the transistor world along quite a bit.
Gordon [Moore] and I have speculated and given some talks on what would have happened if the group of us hadn’t started Fairchild. How fast would things have moved? The fact that, as I mentioned earlier, this was a universal thing…there were a lot of people working on a lot of stuff. It would have happened. I’m not positive that it would have happened in the Silicon Valley. It may have been centered more around Texas or somewhere on the East Coast. But when Fairchild got started, it proved to be a pretty tough competitor.
--Jay Last was interviewed by Craig Addison of SEMI on September 15, 2007 | <urn:uuid:bed17fe5-22f9-4e7b-ad58-5047d931ec2b> | 2 | 1.640625 | 0.034705 | en | 0.991292 | http://semi.org/en/About/SEMIGlobalUpdate/Articles/P042813 |
When backing up with rsync, how do I keep the full directory structure?
For example, the remote server is saturn, and I want to back up saturn's /home/udi/files/pictures to a local directory named backup.
I want to have (locally) backup/home/udi/files/pictures rather than backup/pictures.
Any help?
2 Answers
Accepted answer (28 upvotes)
Use the -R or --relative option to preserve the full path.
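To make the effect concrete, here is a small local sketch (the demo paths are made up; substitute your saturn:/home/udi/files/pictures source in practice). rsync also honors a "/./" marker in the source path that marks where the preserved portion of the path should begin:

```shell
# Demo of -R/--relative with throwaway local paths (no remote needed).
mkdir -p demo/home/udi/files/pictures
echo "photo" > demo/home/udi/files/pictures/img1.jpg

# The "/./" marker tells rsync where the preserved path starts, so this
# creates backup/home/udi/files/pictures/... rather than
# backup/demo/home/udi/files/pictures/...
rsync -aR demo/./home/udi/files/pictures backup/

ls backup/home/udi/files/pictures    # img1.jpg
```

For the remote case in the question, the same idea would be rsync -aR saturn:/home/udi/files/pictures backup/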
With the Cygwin Windows rsync, and assuming the remote rsync is pointing to the root, I'd do:
rsync -vtrz --delete server::rsyncid/home/udi/files/pictures /cygdrive/d/backup/home/udi/files
That will put the contents of the remote pictures directory in /backup/home/udi/files/pictures. Presumably the syntax under unix would be similar.
It is not useful because we need to reproduce the whole directory hierarchy on the local side before executing the rsync command. – Ludovic Kuty Apr 2 '12 at 11:42
I recently migrated to a new host, a VPS solution. From day one, I started getting WHM/cPanel notifications of brute force attack attempts via root on the main account, 3-4 times per day. I know this is more and more typical in general, but...
My question is whether or not it's typical and/or something to be concerned about when it happens on a brand new server?
Note: I'm not asking how to defend against brute force attacks (e.g., using strong passwords and possibly removing ssh access by password authentication).
5 Answers
Accepted answer (1 upvote)
If a server's IP is accessible to the internet, it'll see attacks. Worms etc. crawl the publicly available IP space for victims, and on a VPS host there's a good chance your IP was another known server until recently.
Installing fail2ban or denyhosts to block brute force attempts is a pretty common step.
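As a sketch (the thresholds below are illustrative, not recommendations — check your distribution's defaults and filter names), a minimal fail2ban jail for sshd in /etc/fail2ban/jail.local might look like:

```
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
```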
Thanks. I've got cPHulk running (docs.cpanel.net/twiki/bin/view/11_30/WHMDocs/CPHulk), which I assume is similar to denyhosts. – technoTarek Apr 29 '13 at 15:30
Looks like it, yep. – ceejayoz Apr 29 '13 at 15:38
Yes. This sort of thing is just part of the "background noise" of having an internet-connected system.
Disable root login via ssh and turn off password authentication in your sshd_config (using key auth instead), and you should be sufficiently safe from brute force attacks.
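For reference, the relevant sshd_config directives are sketched below (this assumes you already have key-based login working — it is easy to lock yourself out otherwise). Restart sshd after editing:

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```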
Yes, it's typical. Basically any system on the internet is constantly under some kind of attack. Usually, it's just considered background noise unless you have a large number of attempts.
Since your server is a cPanel server, it provides brute force protection by default. Have you enabled "cPHulk" on your server? It protects your server from brute force attacks. For more details you can read this http://docs.cpanel.net/twiki/bin/view/AllDocumentation/WHMDocs/CPHulk
He posted that he's running that in the comments. – ceejayoz Apr 29 '13 at 16:53
Lots of software calls home with the IP address; then people get their hands on the information and try to hack into the servers. This is fairly common. I would contact your hosting provider and ask for assistance.
This has nothing to do with it. While this may be true for desktop/workstation software, there is very little server software that "phones home". As mentioned in my answer, traffic like this is just part of being on the internet. – EEAA Apr 30 '13 at 0:00
HistorySpin: The Year of the Four
BlairWalshProject drives a hard bargain, but after a pallet of Fernet and a very strange evening in San Francisco, he agreed to let me borrow the HistorySpin keys for a jaunt. What are we learning about today, you ask? We're learning about the Gregorian calendar's year of 68/69 CE: The Year of the Four Emperors.
This story actually begins in 54 CE, when Nero—then a wry 16-year-old—assumed the Emperorship after Claudius was fed poisoned mushrooms by his scheming wife (and Nero's mother), Agrippina. Always known for a dark sort of wit, Nero forever after referred to the dish as "food of the gods." Delicious.
Anyway, in 59 CE—at the close of what scholars refer to as the "quinquennium"—Nero had something of an existential/political crisis and decided to have his mother, Agrippina, killed. Now, I know what you're thinking, that's terrible, right? Well, yes, it is terrible, and it's even more terrible in the way that it went down: while Agrippina was vacationing in the arch of Italy's boot, Nero contrived to have her pleasure cruise more or less collapse around her. But did she die there and then? Hell no! She swam ashore to the applause and tears of those on the beach. Nero, meanwhile, is, like, totally freaking out and has one of his slaves just go ahead and stab her. Apparently, when confronted with the stiletto, Agrippina pointed at her stomach and said something like, "Here! Stab me here! Where that little fuck came out of me!"
Needless to say, Nero's advisors at the time, Burrus and Seneca,[1] had had enough of their charge and wisely retired from public life. In their place, Nero appointed Tigellinus, who was a very bad man. Were you dumb enough to profess your Christianity back then, Tigellinus would—in the words of Juvenal—use you as a street lamp (you'd be crucified and burned)!
In 65 CE, Nero got wind of a conspiracy against his life. The so-called Pisonian Conspiracy (Calpurnius Piso was its leader) planned to kill Nero at a spring festival in Baiae. Without getting into too much detail, Nero had some 100 patrician and equestrian rank men and women put to death. Most went the route of suicide, happily taking a bath, opening their veins, and drinking themselves to sleep.
Less than three years later, in June of 68 CE, Nero was abandoned by his own Imperial guard and declared a public enemy. He attempted escape from the city (some say while wearing women's clothes), but as the horsemen closed in he quoted Homer ("I hear the rumbling of horse hooves") and finally bled out from a self-inflicted stab wound, his secretary guiding the blade ("Like an artist, I die").
Galba, or how to revolt your way to the throne
The background noise to Nero's demise involves a series of revolts in Gaul (modern France) and the Rhine river valley on the part of disgruntled Roman soldiers and their commanders. In the spring of 68 CE, mere months before Nero's death, Galba was governor of the northeastern sector of Spain. His counterpart up in Gaul, a certain Vindex, rebelled against Nero's rather severe taxation policy and backed Galba for the throne, only to be squashed by Lucius Verginius Rufus, whose Rhine legions just happened to be in the neighborhood and looking for a fight.
With Romans killing Romans in the north, the head of Nero's Praetorian Guard, Nymphidius Sabinus (great name) convinced his men not only to hunt down Nero—who had lost all political clout in the city—but transfer their loyalties to Galba, who had proven himself a capable, if draconian, provincial governor in Spain. Thus, on 9 June 68 CE, Galba was officially recognized as emperor and made a desultory return to Rome, but not before levying stringent fines on those provincials who didn't recognize his auctoritas.
You've got to hand it to Galba: he stayed in power for about seven months. In that time, however, he refused to pay Nymphidius' Praetorian Guards the money promised to them for supporting Galba in the first place. Moreover, Verginius Rufus' legionnaires were declared public enemies for obstructing Galba's ascension to the Emperorship (even though at the time they were merely keeping order under Nero). Funny how leadership changes can retroactively write and interpret laws.
On 1 January 69 CE, Rufus' German legions named Aulus Vitellius (we'll come back to Vitellius) their emperor, not least because he had taken over for Rufus as governor/commandant for Germania Inferior. At the same time, Marcus Salvius Otho, feeling slighted that he hadn't been named Galba's successor, bribed an already mercenary Praetorian Guard at Rome into his allegiance and protection. Dismayed at these parallel political blows, on 15 January 69 CE, Galba rushed into the Forum in an attempt to restore some semblance of power and peace. He was immediately mobbed, beaten to death, and beheaded.
Otho, or how to fall backwards into the throne
With Galba's corpse being bandied about the city, what was a power vacuum turned into a super-vacuum. Otho was named Emperor by the Senate, while Vitellius had been named Emperor by his own men. This sort of double timing popped up at various points in Roman history and almost always ended in blood. Much blood.
Otho was by nature a cagey and penurious man. In the interest of peace, he sent emissaries to Vitellius, our ersatz Imperator in the north. Vitellius, however, had already dispatched his elite XXI Rapax legion to forcefully depose Otho. When no accord could be struck, then, Otho marched out from Rome and had his ass handed to him at the "battle" of Bedriacum, near modern Cremona, Italy.
Otho held the Emperorship for little more than three months. After a sound defeat at Bedriacum, he took his own life on 16 April 69 CE. At the close of his chapter on Otho, the biographer Suetonius gives us the following personal information:
He is said to have been of moderate height, splay-footed and bandy-legged, but almost feminine in his personal care. He had the hair of his body plucked out, and because of the thinness of his locks wore a wig so carefully fashioned and fitted to his head that no one suspected it. Moreover they say that he used to shave every day and smear his face with moist bread, beginning the practice with the appearance of his first facial hair, so as to never have a beard.
Weird guy.
Vitellius, or a throne got by arms must sustain itself by arms
On the day of Otho's suicide, the Senate named Vitellius the lawful Emperor of Rome. Things went downhill quickly. For starters, Vitellius, in his infinite jealousy, began inviting other Roman elites to the Imperial mansion on the pretense of dinner and power negotiations, but in reality had his guests assassinated. This isn't to say that actual banqueting didn't take place. On the contrary, it did! And to such an extent that Vitellius drained Rome's coffers in a matter of months. Soon, creditors came looking, only to be done away with by the very man owing them money.
Perhaps the most fascinating aspect of Vitellius' reign, such as it was, concerns Roman wills and estate culture. You see, back then, to stay in good graces with one's Emperor, part of your estate was always left to the Prince upon your death. This originated as a form of patriotism and internal revenue. Vitellius, though, once he learned that an elite male had bequeathed part of his fortune to the Emperor, had that person immediately killed so as to collect his due. It's not hard to see how this might irk certain individuals.
Vespasian, or how to stop the blood-letting
With Vitellius using Rome and her citizens as his personal checkbook, the summer of 69 CE wore on until 1 July, when Vespasian was named Emperor by his legions at Alexandria, Egypt. At the time, Vespasian was dealing with a pesky little group of people known as "the Jews," in Judaea, who had decided to cast off the shackles of their Roman oppressors and [gasp] fight back. After cobbling together an army from forces in Judaea and Syria, Vespasian sent his lieutenant, Marcus Antonius Primus, ahead to Rome with the announcement that he, Vespasian, would kindly like to take the throne from Vitellius.
In the meantime, Vespasian traveled back to Alexandria, where he took over the grain trafficking, thus cutting Rome off from over half of her bread income. This practice is a notoriously effective way to negotiate with irksome political enemies, and Vespasian's coup on this count is the one of the main reasons he was able to take power so quickly and decisively.
With Vespasian's man (Primus) knocking on the gates, Vitellius waffled and waffled until finally mustering the courage to march out against him. In a wonderful historical irony, Vitellius' army met their match at Bedriacum (military historians have creatively dubbed this battle "Bedriacum 2"), where they were crushed by Primus. As was often the case, the losing general got away with his life and went into hiding.
In a last gasp of desperation, Vitellius tried to bribe his way back into the good graces of powerful men, but the Roman elites were fed up. So fed up, in fact, that they had negotiated with Primus for Vitellius' retirement without his knowing. This was never to be, however, since as Vitellius was being led to the Imperial Palace—now Primus' base of operations while he waited for Vespasian to show up—he was assassinated by an overzealous Praetorian on 20 December 69 CE. Suetonius tells us that his body was thrown into the Tiber River; Cassius Dio says that he was decapitated and his head was carried around the city. They both agree that his sons were murdered.
On 21 December 69 CE, Vespasian ascended to the Emperorship of Rome. He ruled for just under ten years, before his son, Titus, took over for him, and after him, Domitian. Taken together, the three of them account for the Flavian Dynasty of Roman Emperors, who were responsible for building the Coliseum and decimating the Jewish population of the Levant.
[1] Well, not before Seneca wrote a letter to the Roman Senate explaining the reasons why Agrippina had to die. Would that we had that exemplar of political chicanery! | <urn:uuid:b165154c-c303-4550-a7c1-2fbea9efde19> | 2 | 2.125 | 0.215309 | en | 0.982934 | http://sidespin.kinja.com/historyspin-the-year-of-the-four-1585553098 |
I have a WPF user control with a number of textboxes, this is hosted on a WPF window. The textboxes are not currently bound but I cannot type into any of them.
I have put a breakpoint in the KeyDown event of one of the textboxes and it hits it fine and I can see the key I pressed.
The textboxes are declared as
<TextBox Grid.Row="3"
         Style="{StaticResource SearchTextBox}" />
The style is implemented as
<Style x:Key="SearchTextBox"
       TargetType="{x:Type TextBox}">
    <Setter Property="Control.Margin" Value="2"/>
    <Setter Property="Width" Value="140"/>
</Style>
I am hoping I have overlooked something obvious.
EDIT: I only added the KeyDown and KeyUp events just to prove that the keys presses were getting through. I do not have any custom functionality.
What else happens in the method "PostcodeSearch_KeyDown"? Just using your pure XAML text box and style it all works fine for me, so the only thing left is that something in that method is affecting the box. – Steve Fenton Jul 10 '09 at 7:02
Also, is there anything set on or within the UserControl that could be effecting the TextBoxes? – rmoore Jul 10 '09 at 15:20
1 Answer
Accepted answer (1 upvote)
If your PostcodeSearch_KeyDown method (or anything else ahead of the textbox in the event chain, for example a parent control's PreviewKeyDown handler) sets e.Handled = true (e being the event args), the event will not be propagated to further consumers such as the textbox, and thus no action will occur.
Another reason might be that your WPF window is hosted in a WinForms application; in that case you will need to call

ElementHost.EnableModelessKeyboardInterop(yourWpfWindow);

to make keyboard interaction work (google/bing for WPF WinForms InterOp for a full explanation).
Is there a fast way to clear the previous content of an MSXML2.DOMDocument object prior to reuse? I've been in the habit of discarding them and creating a fresh instance each time but this strikes me as wasteful and profiling a few test cases seems to confirm this.
I'm sticking with MSXML 3.0 in this case for portability, and I realize this older version has some quirks when it comes to using XPath to select large sets of nodes. Trying to select the whole document tree and then removing it doesn't feel clean and doesn't run as fast as I'd like. The "lazy selection" MSXML 3.0 uses doesn't inspire confidence either:
selectNodes Method
Previously, in MSXML 3.0 and earlier versions, the selection object created by calling the selectNodes method would gradually calculate the node-set. If the DOM tree was modified, while the selectNodes call was still actively iterating its contents, the behavior could potentially change the nodes that were selected or returned. In MSXML 4.0 and later, the node-set result is fully calculated at the time of selection. This ensures that the iteration is simple and predictable. In rare instances, this change might impact legacy code written to accommodate previous behavior.
I also realize that reusing such an object requires being mindful of the current settings of different properties (SelectionLanguage, etc.) that might linger between uses. I'd think that shouldn't be a big deal though, especially if the reusage always follows the same pattern.
I suppose what I'm after then is some clean and fast way to clear the loaded DOM to reuse it, or more input as to why reuse might be worse than the alternative of recreation.
I'm no MSXML whiz, but have you tried calling Document->putref_documentElement (with a newly constructed, empty root element) or calling Document->load (with a pointer to a different XML source)? – reuben Sep 3 '09 at 8:12
Loading won't help because I'm constructing the Document in code, but the other idea is worth trying. Thanks! – Bob77 Sep 3 '09 at 11:06
Replacing the root element seems to do the trick. Too bad this wasn't suggested as an answer, I'd accept it. – Bob77 Sep 4 '09 at 14:08
1 Answer
Accepted answer (1 upvote)
You may consider migrating to MSXML6:
1. First of all, MSXML6 is in-the-box with WinXP SP3, Vista, Windows Server 2008, Win7 and Windows Server 2008 R2. The only OS supported by Microsoft that doesn't have MSXML6 in band is Windows 2003, where you'll have to let the customer download the MSI. Overall, MSXML6 is almost as portable as MSXML3.
2. Unlike MSXML3, which supports both XSL Pattern and XPath, MSXML6 supports XPath only, and SelectNodes and SelectSingleNode work only in the context of a snapshot.
3. Unlike GetElementsByTagName, snapshot semantics are defined by the W3C, and MSXML6 has better performance and W3C compliance.
Also, you shouldn't care too much about cleaning up the document after each use, as MSXML has garbage collection internally, meaning you'll not get the memory back when you replace the document element. My advice is to skip any specific cleansing effort: just reuse the instance for the next load, or rebuild the tree with the DOM API. If memory usage is really a big concern, XmlLite can give you full control.
Your point regarding MSXML 6 is well taken, however there are still lots of Win2K and Win9X systems in the wild among my target PC base. As far as "garbage collection" goes I think you are way wrong. MSXML DOM is COM-based, so cleanup is useful (and effective) there, unlike something created in .Net or Java. But if you have evidence to the contrary I'd appreciate a link. – Bob77 Aug 5 '11 at 20:43
Tried a quick benchmark, and there is no magic cleanup. However there is "lazy cleanup" because checking immediately reveals only part of the memory is reclaimed. To really clear the overhead means releasing the object entirely, not even .loadXML("") gets it all back. My point in cleaning up at all is that I may load and use a large document, then wait a long time (minutes or hours) before needing another one - or I might construct another within milliseconds. It all adds up when you're writing server side code that may have hundreds of clients. – Bob77 Aug 5 '11 at 22:01
Where you are now is what I am trying to help. MSXML is a COM with GC internally, as I've pasted the link, and that was a long story back to Visual J++. Anyway, I just provide the information to help you to choose the right solution in both platform and the strategy for cleanup. I don't think it worth a down vote. – Samuel Zhang Aug 6 '11 at 0:27
The downvote was re. MSXML 6. However I reconsidereed but by then it woud not let me undo the vote. If you do a minor edit it will let me retract the down vote. – Bob77 Aug 6 '11 at 0:56
Reading the linked article, I'd even be glad to upvote now. Looks like a great reason to avoid the MSXML DOM entirely, and may explain a number of other erratic performance problems we're seeing with it. I see their reasons, large object hierarchies are never cheap to clean up. – Bob77 Aug 6 '11 at 0:59
I'm new to Rails and I'm wondering if there is an option to change the default rails server, i.e., webrick, for another one such as 'puma' or 'thin'. I know it is possible to specify which server to run with 'rails server' command, however I would like to use this command without specify the name of the server so it can run the default rails server. Is there a way to change the default rails server into a configuration file or something like this? Thanks in advance for your help!
5 Answers
Accepted answer (4 upvotes)
I think rails simply passes on the server option provided to rack. Rack has the following logic to determine what server to run:
def server
  @_server ||= Rack::Handler.get(options[:server]) || Rack::Handler.default(options)
end
The first case is when a :server option was passed to the rails server command. The second is to determine the default. It looks like:
def self.default(options = {})
  # Guess.
  if ENV.include?("PHP_FCGI_CHILDREN")
    # We already speak FastCGI
    options.delete :File
    options.delete :Port
    Rack::Handler::FastCGI
  elsif ENV.include?("REQUEST_METHOD")
    # CGI
    Rack::Handler::CGI
  else
    pick ['thin', 'puma', 'webrick']
  end
end
Thin and Puma should be automatically picked up. The fallback is Webrick. Of course other web servers could override this behavior to make them the first in the chain.
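As a toy illustration of that "first available handler wins" behavior (this is not Rack's actual code; the handler names and the availability table below are hypothetical):

```ruby
# Simulate Rack's fallback: walk the preference list and return the
# first handler that is "installed". Here we pretend Thin is absent,
# so Puma wins even though Thin is listed first.
AVAILABLE_HANDLERS = {
  "puma"    => :PumaHandler,
  "webrick" => :WEBrickHandler,
}

def pick(preferred)
  preferred.each do |name|
    handler = AVAILABLE_HANDLERS[name]
    return handler if handler
  end
  raise "no web server handler found"
end

puts pick(%w[thin puma webrick])  # prints "PumaHandler"
```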
If your Webserver is not picked up by default you could monkey-patch the default method to work like you want it. Of course this could break in future versions of rack.
Based on James Hebden's answer:
Add Puma to gemfile
# Gemfile
gem 'puma'
Bundle install it
Make it default, paste this code into script/rails above require 'rails/commands':
require 'rack/handler'
Rack::Handler::WEBrick = Rack::Handler.get(:puma)
So script/rails (in Rails 3.2.12) will look like:
#!/usr/bin/env ruby
require 'rack/handler'
Rack::Handler::WEBrick = Rack::Handler.get(:puma)
require 'rails/commands'
Run server
rails s
=> Booting Puma
To avoid an error when the puma gem isn't installed, you can wrap Rack::Handler::WEBrick = Rack::Handler.get(:puma) with begin ... rescue LoadError end – Abe Voelker May 3 '13 at 13:59
Rack (the interface between rails and a web server) has handlers for the default WEBrick, and also for Thin. If you place the following in your Gemfile in the root of your rails project
gem 'thin'
rails server will automatically use Thin. This has been the case since 3.2rc2.
This unfortunately only applies to Thin, as Rack does not have built-in support for Unicorn, and others.
For servers that have Rack handlers (again, sadly Unicorn does not), you can do a bit of a hack to get rails server to use them. In your scripts/rails file in the root of your rails project, you can add the below just above `require 'rails/commands'`:
require 'rack/handler'
Rack::Handler::WEBrick = Rack::Handler::<name of handler class>
This essentially resets the handler for WEBrick to point to the handler for the server you would like to use.
To get an understanding of the supported Rack handlers, take a look at the comments in the source: https://github.com/rkh/rack/blob/master/lib/rack/handler.rb
Should be require rails/commands, not rack/commands. Can't edit your post, stupid StackOverflow says: Edits must be at least 6 characters; is there something else to improve in this post? :) – denis.peplin Feb 16 '13 at 15:42
Ah - thanks for spotting the typo! – James Hebden Feb 18 '13 at 8:58
If you want unicorn/thin/etc, just add the gem to your gemfile
i.e. gem 'unicorn', gem 'thin', etc. then run bundle install at the command line.
As far as I can tell, adding either of these gems runs the appropriate server via rails server
Apparently this only works for Thin or Puma.
when I add gem 'thin' to the gemfile it works, but for gem 'unicorn' it doesn't work because when I run the command rails server it starts Webrick instead of unicorn, for this reason I'm asking if there is another option. Thanks. – airin Jan 3 '13 at 21:47
@airin, from your RAILS_ROOT run unicorn_rails – sameera207 Jan 3 '13 at 22:02
I wouldn't get hung up on specifically using the rails server command. Just install whichever gem you want and alias the command (e.g. rails s Puma) to something simple like rs.
| <urn:uuid:23140026-dfaf-4033-91c9-f1afbbd6e4ec> | 2 | 2.078125 | 0.205116 | en | 0.828842 | http://stackoverflow.com/questions/14146700/how-to-change-the-default-rails-server-in-rails-3/14146917 |
Given a QGraphicsScene, or QGraphicsView, is it possible to create an image file (preferably PNG or JPG)? If yes, how?
3 Answers
Accepted answer
I have not tried this, but this is the idea of how to do it.
You can do this in several ways. One form is as follows:

QGraphicsView* view = new QGraphicsView(scene, this);
QString fileName = "file_name.png";
QPixmap pixMap = QPixmap::grabWidget(view);
// Uses the QPixmap::grabWidget function to create a pixmap and paints the QGraphicsView inside it.
pixMap.save(fileName);

The other is to use the render function QGraphicsScene::render():

// The QImage must be initialized (size and format) before painting into it.
QImage image(scene->sceneRect().size().toSize(), QImage::Format_ARGB32);
QPainter painter(&image);
scene->render(&painter);
image.save(fileName);
awesome! thanks. i tried the second approach. the only thing required is that the QImage needs to be initialized. – Donotalo Sep 17 '11 at 13:42
After just dealing with this problem, there's enough improvement here to warrant a new answer:
scene->clearSelection(); // Selections would also render to the file
scene->setSceneRect(scene->itemsBoundingRect()); // Re-shrink the scene to its bounding contents
QImage image(scene->sceneRect().size().toSize(), QImage::Format_ARGB32);
image.fill(Qt::transparent); // Optional: start from a transparent background
QPainter painter(&image);
scene->render(&painter);
image.save("file_name.png");
grabWidget is deprecated; use grab(). And you can use a QFileDialog:

QString fileName = QFileDialog::getSaveFileName(this, "Save image", QCoreApplication::applicationDirPath(), "BMP Files (*.bmp);;JPEG (*.JPEG);;PNG (*.png)");
if (!fileName.isNull())
{
    QPixmap pixMap = this->ui->graphicsView->grab();
    pixMap.save(fileName);
}
| <urn:uuid:3e81256c-8012-47f5-81fc-0e91011fe043> | 2 | 1.664063 | 0.991479 | en | 0.683582 | http://stackoverflow.com/questions/7451183/how-to-create-image-file-from-qgraphicsscene-qgraphicsview |
I am a Linux only person, but bought a laptop with Windows 8 for gaming. The Samsung Series 5 UltraBook came with a lot of crapware installed. I would like to remove some of the software, but for this I need to have Administrator rights.
So I followed about 3-4 online tutorials, but I get an error. I used the
net user administrator /active:yes
command but I get the error
system error 5
as response.
Then I followed a different tutorial via the control panel "Manage ...". In the last step the tutorial said to click on "Groups and local accounts" but my screen didn't show this folder.
Note: I am NOT using a guest account, as far as I can tell.
UPDATE: I tried all the options in the following page, and all failed:
The error is "Local User and Groups. This snapin may not be used with this edition of Windows 8. To manage user accounts for this computer, use the User Accounts tools in the Control Panel." The method via sec-policy fails as well.
WARNING! I just tested Windows 8 and it is really that bad. I am a CS PhD student, so I thought it was the users. But it is really Windows 8. Avoid Windows 8 at all costs! I had to spend half a day to boot my computer into safe mode. If you are greeted with a blank black screen, the only option to boot into safe mode is to create a bootable DVD or USB stick. (Or you might be lucky, and your BIOS is MS certified; then it's easier.) In Windows 8 the boot menu is disabled; you just can't press a key to get into safe mode. This could have been handled better, but they don't care. Solution: use Windows 7 or stick with Ubuntu.
The default account is always the administrator account – pratnala Dec 22 '12 at 6:11
3 Answers
Drag your mouse to the top right corner, in the search enter Command Prompt, then right-click it and run as administrator. Then this will be done.
Could it be you are missing the Password instruction?
Net user administrator P$sw0rdY
Net user administrator /active:yes
Also start by 'Run as Administrator'. Here are instructions I created to Activate the Windows 8 administrators account.
Accepted answer
Okay, I made the mistake of not running the command prompt with Administrator rights.
Use Windows+X.
Select "Command Prompt (Admin)"
Then the following command works:
net user administrator /active:yes
| <urn:uuid:ccd9a1d6-b79f-4a20-a49a-c2580c7a5a33> | 2 | 1.632813 | 0.030404 | en | 0.898345 | http://superuser.com/questions/522885/login-in-as-administrator-on-new-laptop?answertab=active |
WebMD Symptom Checker
Dizziness, Fatigue and Ringing in ears
WebMD Symptom Checker helps you find the most common medical conditions indicated by the symptoms dizziness, fatigue and ringing in ears including Meniere's disease, Medication reaction or side-effect, and Multiple sclerosis.
There are 83 conditions associated with dizziness, fatigue and ringing in ears. The links below will provide you with more detailed information on these medical conditions from the WebMD Symptom Checker and help provide a better understanding of causes and treatment of these related conditions. | <urn:uuid:d73fc063-3480-4b9b-9010-55940ea2b7fa> | 2 | 2.125 | 0.082247 | en | 0.91167 | http://symptomchecker.webmd.com/multiple-symptoms?symptoms=dizziness%7Cfatigue%7Cringing-in-ears&symptomids=81%7C98%7C193&locations=66%7C66%7C4 |
Write a News Story
Meet a Pioneer Pilot
Amelia Earhart AKA Flier
By: Keondre H.
Virginia, Age 12
Amelia Earhart vanished on her last trip. She was trying to be the first woman to go around the world, but when she was on her trip her plane crashed into the Japanese seas. Amelia Earhart might have died or she might have gotten a little injured. If she were to survive, the Japanese might have found her and taken her in as a prisoner. They might have asked her some questions. Like why was she flying over their seas, was she a spy, or was she from America. The Japanese might have killed her. Amelia Earhart was a true flier too. That's my story on Amelia Earhart AKA Flier.
New Solution For Your Transistor BBQ
Posted by timothy
from the but-the-ribs-aren't-done-yet dept.
servantsoldier writes "There's a new solution for the transistor heat problem: Make them out of charcoal... The AP is reporting that Japanese researchers, led by Daisuke Nakamura of Toyota Central R&D Laboratories Inc., have discovered a way to use silicon carbide instead of silicon in the creation of transistor wafers. The Japanese researchers discovered that they can build silicon carbide wafers by using a multiple-step process in which the crystals are grown in several stages. As a result, defects are minimized. Other benefits are decreased weight and a more rugged material. The researchers say that currently only a 3" wafer has been produced and that a marketable product is at least six years away."
• by Baka_kun (647710) on Thursday August 26, 2004 @01:31AM (#10075910) Journal
the text said "... that Japanese researchers, led by Daisuke Nakamura of Toyota Central R&D Laboratories Inc., ..."
but i read "...that Japanese researchers, led by Duke Nukem of Toyota Central R&D Laboratories Inc., ..."
other than this, Great, if this works in practice well be having new smaller cpus for everything.
but im still waiting for a pda without screen, that uses my glasses as a screen.. but thats more of scifi than reality.
• by chatgris (735079) on Thursday August 26, 2004 @01:32AM (#10075913) Homepage
This may be modded as funny.. But realistically, think about this.
The amount of heat being generated by chips does not seem to be decreasing at all, and this material appears to be produced to be "heat resistant" instead of more efficient.
How long until your PC puts out enough heat that it would be economical to re-use that heat for a hot water tank, or for winter heating?
How long until we need special 240V plugs like electric stoves have for power?
I think that emphasis on more efficient chips is a better venture than heat-resistant materials, as the whole heat byproduct of CPUs seems to be spiralling out of control.
• by Anonymous Coward
Most countries have 240v to begin with.
• You can do that right now. My Athlon 1800+ keeps my room nicely warm in winter, and it's a (relatively) low consumption chip.
• A couple of those, connected via heatpipe to a hotplate at the top of the case, would make an excellent hot-plate for a coffee or tea pot =)
As for the plugs - well, there's some way to go yet. At the moment, power supplies are on the order of 5-600W. An electric heater can put out up to 3000 or so watts.
I used to run a constantly-on heater, two PCs, three monitors, some random home networking equipment and a desk lamp all off a series of four-way power bars connected through a single 13A 230V UK plug. The
• Oh, before anyone tries the stage-lighting thing: it *worked*, but the plug got pretty hot and eventually the circuit breakers tripped. The problem was solved by splitting the load over two plugs on opposite sides of the stage =P
Ah, the days of helping out with school stage tech. I still don't think the music dept. has forgiven me for blowing up two of their (old, crappy, faulty-but-not-diagnosed-until-they-failed) PA amps in one night...
• You're lucky. With our lowly 120V supplies here, 2000 Watts is about as much as you can ever expect on a single circuit. (theoretically 2400W on a 20A circuit, but once you're pulling close to 20A, the wires and cords themselves start to draw enough in heating that it adds up)
On the other hand, I have accidentally touched live AC wires a few times (and even stuck my finger in a light socket as a kid) and had relatively minor effects from it. I'd imagine 220/240 has a bit more of a kick... :)
- Peter
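The wattage figures the posters trade back and forth are just P = V × I; a quick sketch with the thread's own numbers (Ruby used only for the arithmetic):

```ruby
# Back-of-envelope circuit capacity, P = V * I, using the figures quoted
# in the comments above (values are the posters', not measured data).
def max_watts(volts, amps)
  volts * amps
end

max_watts(120, 20) # => 2400 -- "theoretically 2400W on a 20A circuit" (US)
max_watts(230, 13) # => 2990 -- a UK 13 A plug, roughly the 3000 W heater figure
```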
• well here in australia we already use 240V for everything.
and then we have three phase for serious stuff...
Oh and 16amp plugs for real servers...
hmmm well it was a nice idea.
• by spellraiser (764337) on Thursday August 26, 2004 @02:19AM (#10076049) Journal
Yes, but I still think water cooling is the way to go, personally.
• by Moraelin (679338) on Thursday August 26, 2004 @07:50AM (#10076952) Journal
Yes, silicon carbide and water cooling will get the heat out of the CPU faster.
The problem still remains that a metric buttload of heat is produced, and that it comes out of the electricity bill. Sometimes twice: in the summer you also pay for the air conditioning, since that shiny new CPU is heating the room some more.
I think it's getting ludicrous.
The Prescott is already over 100 W, and Intel apparently plans dual core versions. Whoppee for 200+ W CPUs. NVidia 6800 Ultras are rated for 120 W, and they're hyping SLI setups now. Yep, _two_ graphics cards, if just 120W worth of hot air blowing off the back of the case wasn't enough.
Add hard drives, motherboard, and the PSUs own inefficiency, and you're already looking at 1000W worth of heat for the whole computer. That's already like a space heater.
In fact, go ahead and turn a space heater on near your desk in the summer, and you've got a pretty good approximation of what the next generation of computers promises to be like. Now picture some 4 of them in the same room, at the office.
And it's raising exponentially. Carbide and water cooling will only help them get further along that curve.
And I'll be damned if I'm thrilled at the prospect.
This also brings the problem of even more fans. Even with water cooling, you then have to get the heat out of the water. It still means fans. More heat will just mean more fans, bigger fans, or faster fans. Or all the above.
And I'm not thrilled at the prospect of the return of the noisy computer either. I can jolly well do without the machine sounding like a jumbo jet. Especially when I'm watching a DVD or such, I can do without having to turn the volume sky high just to be able to hear what they're saying. And at the office I can do without four noisy hovercrafts in the same room.
• The amount of heat being generated by chips does not seem to be decreasing at all ...
I disagree. I've just upgraded an Athlon XP 1800+ system to an Athlon64 3500+.
The new box runs around 20 degrees C cooler than the old one at idle and under heavy load; both use the supplied retail AMD heatsinks. I'm not using "Cool 'n Quiet" on the '64; it might take a bit off the idle temperature, but I don't see the point.
• by Anonymous Coward
from the article:
Devices built with the rugged material would not require cooling and other protections that add size, weight and cost to traditional silicon electronics in power systems, jet engines, rockets, wireless transmitters and other equipment exposed to harsh environments.
So you see, besides that it is nearly as hard as diamond and can survive the temperatures of re-entry into the Earth's atmosphere, they want use it to replace silicon electronics that are used in more stressful environments. A
• This isn't exactly answering your post, but don't forget that there are other uses for silicon than processors. Think industrial power switching, high power drives.
• All we need is some way to convert heat directly into electricity.... dream on I guess.
• They do they are called thermo-couples and operate on the peltier effect.
Take two different wires twist them together into two junctions, break one wire put in a meter; then heat one junction, cool the other and electrical current flows. the peltier cooler work by adding current which causes one junction to warm, and the other to cool.
You should be able to take a peltier cooler, heat one side and cool the other and get some electricity out of it. I imagine the efficency is pathetic, but its just waste he
Heat resistance isn't the point -- current IC's don't melt, they get trashed via difusion processes that will still be there in SiC.
The advantage of SiC is substantially enhanced (2x) thermal conductivity vs. Si. This makes it easier to get heat out of the chip, allowing it to run cooler at any given heat production rate.
• The voltage doesn't matter; it's the wattage. So, you probably won't need more than 120V for future machines, but you may need better wiring so that more amps can be carried to it without blowing a fuse (or lighting your house on fire).
• Honestly, I do that as it is today! During the winter, I'm a cheap miser, and keep the rest of the house at about 50. I keep my computer in my room and always keep the door closed, and it'll reach a balmy 70 degrees just from the PC.
• Imagine ... (Score:1, Redundant)
by valmont (3573)
... a lighter and more rugged beowulf cluster of those.
• Charcoal? (Score:5, Insightful)
by mikeophile (647318) on Thursday August 26, 2004 @01:34AM (#10075925)
Think knife-sharpener.
Silicon carbide is really hard stuff.
It's not quite diamond, but with a hardness of 9.25, you could use your SiC processor to grind real axes and not just figurative ones in flamewars.
• you're right: born from a star [http]
• Re:Charcoal? (Score:5, Interesting)
by DarkMan (32280) on Thursday August 26, 2004 @07:47AM (#10076915) Journal
Not quite.
I've got quite a bit of experience with SiC abrasives, what with the materials engineering and being a bit of a lapidary.
First off, it's nowhere near diamond in terms of hardness. The Mohs scale is semi-arbitrary in assignment, and not even vaguely linear. On a proper hardness scale (in this case Vickers), diamond has a hardness of around 90 GPa, compared to about 25 GPa for SiC. That's the reason I've got a box full of diamond abrasives: despite the cost (about 30 times more expensive), they are much faster, and last almost indefinitely. More later on this.
Secondly, SiC needs to be rough. If you don't believe me, try grinding a carrot into shape on a window. The glass is very much harder than the carrot, but is nearly perfectly smooth, and as such, the carrot just slides about. Compare with rubbing the carrot on something like a concrete paving slab, which grinds it much better. The relative hardnesses are wrong here, but it shows the need for surface roughness.
As an aside, if you think that paper cuts are bad from standard office paper, then try getting one from fine SiC abrasive paper. Stiffer paper cuts deeper, and the abrasive roughs up one side of the cut, so it takes about four times as long to heal. It's a mistake I've made exactly once.
A processor is not a single pure material - if it was, it wouldn't do anything. They are a complex layered system, with layers of copper and SiO. Trying to grind anything with a processor die will just succeed in scraping off all that important stuff. The hardness of SiO is Mohs 7, well below that of anything actually used as an abrasive for metals. (It's the same as ground glass, near enough, sometimes used for abrading wood or plastics.)
For comparison, silicon has a hardness of 12 GPa Vickers. SiC is only around twice as hard as that.
So, no, you can't really use it as an abrasive. If you really want to be very careful, you might be able to use the edge of the die as a scraper, but you'd probably just remove the important stuff.
That's all a moot point, however. I strongly suspect that you'll never see the actual die; it will be under a metal heat spreader. Because they can cope with higher temperatures [0], there is even less need to take the risk of mishandling breaking the die.
And lest you think that SiC would be less likely to break than silicon, I'm afraid not. Aside from the fact that many broken Athlons are due to the top few layers of SiO and metal breaking, SiC is not that much tougher than silicon. As any lapidary will tell you, it's perfectly possible to chip sapphire and diamond, if you're not careful.
Still, I can't deny that, facts aside, it's a wonderfully evocative metaphor.
[0] And how much higher! Silicon tops out at 350 C; SiC could operate at 600 C, where it is glowing red hot! Sourced from NASA
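Putting the comment's Vickers figures side by side shows how misleading the earlier Mohs comparison is (plain arithmetic; the values are the ones quoted above):

```ruby
# Vickers hardness values (GPa) as quoted in the comment above.
VICKERS = { diamond: 90.0, sic: 25.0, silicon: 12.0 }

diamond_vs_sic = VICKERS[:diamond] / VICKERS[:sic]  # => 3.6 -- "nowhere near diamond"
sic_vs_silicon = VICKERS[:sic] / VICKERS[:silicon]  # ~2.1 -- "only around twice as hard"
```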
• The article is kind of vague on the details, for instance, just how much hotter are these semiconductors going to be able to run? Is it possible that chips made from these will have to use a non-plastic casing material? If so, that would be very cool. I doubt it though, that'd have to be pretty hot.
• In Japan... (Score:5, Funny)
by Johnny Fusion (658094) <{moc.liamg} {ta} {odnomnez}> on Thursday August 26, 2004 @01:36AM (#10075937) Homepage Journal
Hirohito: Oh! You must have very big wafer!
Owner: Excuse me?! I was just asking you what you're up to with this manufacturing process!
Nothing! We are very simple people with very small wafer! Mr. Hosek's wafer is especially small!
Hosek: He he he! So small!
Hirohito: We cannot achieve much with so small wafer! But, you Americans! Wow! Wafer so big! SO BIG Wafer!
Owner: Well, I-I guess it is a pretty good size
• by harlemjoe (304815) on Thursday August 26, 2004 @01:39AM (#10075946)
From the article....
In an advance that could lead to lighter spacecraft and smarter cars, researchers have developed a new technique for producing a high-quality computer chip that is much more resistant to extreme conditions than the silicon found in most of today's electronics.
So a chip more resistant to extreme conditions is also somehow 'lighter' and 'smarter'...
A good step forward for science, but not for science journalism...
• by gatesh8r (182908) on Thursday August 26, 2004 @01:45AM (#10075961)
Gives new meaning to "burning up your CPU". Better hope the non-techies never open up their machines...
• Your cpus will have a new use when obsolete...
• by teamhasnoi (554944) <(moc.oohay) (ta) (ionsahmaet)> on Thursday August 26, 2004 @02:02AM (#10076008) Homepage Journal
I'll be able to use these in my flexible paper display ebook with fuel cell technology as I drive to work in my hydrogen powered flying car!
I can't wait!
• I'm all for being able to OC the hell outa my proc and not be worried about burning it..
These CPUs would be far more durable and last a lot longer. Why is that a problem? Think about the last time your job/office/place of business replaced computers. You're gonna be stuck with that slow machine a whole lot longer.
• Re:a good idea? (Score:3, Insightful)
by jimicus (737525)
No you won't. Can you imagine Compaq, Dell or IBM voluntarily producing a PC which never wears out?
• Re:a good idea? (Score:3, Insightful)
by Y2K is bogus (7647)
So the major PC makers wouldn't want to make products that never fail and never require spare parts, except due to catastrophe?
Producing spares isn't their primary focus, and every RMA for stupid broken stuff is costly. A laptop that exceeds the 3 year warranty without breaking would be music to their ears, and consumers.
Your logic is flawed. It isn't "wearing out" that makes people buy new computers, it's the fact that it's too slow or old. Most computers end up surplused, just check the HUGE secondar
• Next step: diamond (Score:2, Interesting)
by CityZen (464761)
If you've got the carbon, why bother with the silicon? Actually, I wonder what they use to "dope" diamond semiconductors?
• ...or did that new supercomputer finally arrive??
• by ArcticCelt (660351) on Thursday August 26, 2004 @02:49AM (#10076112)
Steve Jobs when asked what's next for the iPod:
"You know, our next big step is we want it to make toast," Jobs answered. "I want to brown my bagels when I'm listening to my music."
Damn Steve, again, he saw this charcoal technology coming before anybody. :)
• by Jason1729 (561790) on Thursday August 26, 2004 @02:51AM (#10076116)
Silicon carbide is a very hard, brittle material with a very high melting point commonly used to make crucibles and high speed saw blades and drill bits.
Comparing this to charcoal is like saying that Carbon Monoxide is the same thing as Oxygen because CO contains oxygen.
• by panurge (573432) on Thursday August 26, 2004 @03:23AM (#10076181)
Silicon carbide and diamond both have significant potential use as power semiconductors. Forget CPUs, think I/O. Think smaller power supplies, smaller audio drivers, more rugged automotive systems, and, ultimately, being able to shrink robotics controllers as a next step to producing very small robots. If a robot's motors are running at 80C, you want the power semis to be able to handle that. Furthermore, a lot of possible fuel cell designs run at fairly high temperature and, again, you want the electronics to survive the environment without too much cooling.
There are also huge potential benefits for rad-hard communications satellites, where cooling is a major problem (radiation only.)
• If you google for silicon carbide transistors just about all the hits are for microwave and power applications.
Growing the crystals in a multi-step process sounds like a very expensive process. Probably useful for some hot chips though.
So why the hell do we need hot chips anyway? ARM and MIPS devices run cool. Why does x86 have to be hot? Indeed why the hell are we still wedded to these power hungry devices?
• Not all silicon is used in processors.
The main problem is getting the required purity; silicon-based chips involve a multi-step process to manufacture the substrate now. Basically they take very pure silica sand (SiO2), and purify it as much as possible chemically, reduce it to remove the oxygen, melt it, then extract it by growing a single crystal. The crystal of Si is then heated to just short of the melting point and then, moving it through an electric induction heater, a small portion of the crystal melts, and any remaining impurities t
• "ARM and MIPS devices run cool. Why does x86 have to be hot?
Different markets. X86 is under extreme competitive pressure to produce the fastest possible processors in the medium price range. This means more complicated circuitry to produce the same function. (As a trivial example, compare a simple adder to a look-ahead-carry adder.) The complication adds heat.
• It makes a mockery of "Green PCs" though. In the last 18 years that I have had various PCs the power usage has gone up from ~100W to ~350W for the box. CRT monitor power has gone up too and only switching to an LCD has improved things.
A machine built with 8x ARM cores would have as much grunt as a P4, but cost less and would use only a fraction of the power.
• by Hank the Lion (47086) on Thursday August 26, 2004 @04:33AM (#10076339) Journal
It's very nice that SiC can withstand high temperatures and is very hard, but are these the most important features of a semiconductor material?
I would be more interested in band gap voltage, electron/hole mobility etc.
Who needs a chip that can run hot when it cannot run fast?
Maybe for specialized hardened applications like space, but I don't see these being used for mainstream applications.
• Well, SiC has a wide range for bandgap, 2.2 to 3.25 eV, which is much less stable vs. temperature than Si. This is one of its "problems" for ICs. The other is the difficulty in making large wafers. The huge benefit of its large bandgap is long minority carrier lifetimes....think standard RAM cells that can hold their charge for hundreds of years. The real focus these days for SiC has been discrete power devices since they can function with a much higher junction temperature than silicon devices. Severa
• Who needs a chip that can run hot when it cannot run fast?
Not all electronics is high-speed logic. Think about high-power thyristors and diodes.
• The BBC article (Score:3, Informative)
by Mixel (723232) on Thursday August 26, 2004 @04:34AM (#10076341) Homepage
linky []
• U lot (Score:3, Insightful)
by Anonymous Coward on Thursday August 26, 2004 @04:52AM (#10076379)
Ha, you lot, you think this will be used for CPUs.
It won't. Silicon/germanium is the fastest you can get at the mo (until they can dope diamond).
SiC will be used in hi-temp areas (e.g. aircraft engines) or where they want it to run hotter to up the current handling (i.e. power electronics).
At the mo I am limited to 800A at 1200V for an IGBT, and that is 8 IGBT die in parallel; the die is limited to 100A at 125C.
When I get SiC IGBTs I will be able to pass 800A through a single die and let the die heat up to 300C.
This will mean that expensive heavy heatsinks will be able to shrink.
SiC will NOT be used for hi-speed CPUs!!!
duplicate /. article incoming ... estimated period of arrival: 6 years later .. please update your calendar for Aug2010
• by mikael (484) on Thursday August 26, 2004 @07:00AM (#10076708)
The good news, your graphics card can be overclocked to 2 Terahertz, and still remain operational at over 650C.
The bad news is that the aluminum casing of your PC will melt at this temperature, so your PC will need to be built from titanium.
Now the chips will get hot enough to ignite combustibles (paper, plastic insulation, dust) and still operate. Then you'll cut your hand on the edge of the SiC chip as you're trying to put out the fire...
• If I remember correctly diamond chips are interesting because they can easily bind to organic molecules. I believe I saw a sample chip made by some students and Sumitomo is into it too.
Does silicon carbide have any such properties? (i.e. anything besides heat resistance?)
The flip side of course is for high temperature operation which I think is a bit scary, maybe the chip itself can handle it but what about the stuff next to it? I would rather have lower temperature circuits. As it is only a very tiny vo
• First someone sends in a story while under the impression that aluminum == alumina, now we have silicon carbide == charcoal. Somebody sound the gong, please.
| <urn:uuid:af5bd1d7-14fc-48ca-b7f0-5301f653f55c> | 2 | 2.21875 | 0.114723 | en | 0.951315 | http://tech.slashdot.org/story/04/08/25/2131220/new-solution-for-your-transistor-bbq?sdsrc=nextbtmnext |
Pink duck call unexpectedly raises money for breast cancer research
Like most people, Anne Cross May and her husband, Jim May, owners of Kum Duck Calls in Rickreal, Oregon, have been touched by the effects of cancer. Still, the couple never expected to be raising money and awareness for the cause when they created a pink version of a duck call.
The Mays designed the hunting tool as a way to show their love for their 4-year-old granddaughter, Vivian Rose, according to OregonLive.com. The item, which was named Wild Rose, was soon put up for sale in their store.
However, it was only a short time later that the couple received a note from Joe Reinhardt of Reno, Nevada, explaining how the product had quite the impact on a breast cancer fundraiser.
"Our local retriever club holds an annual AKC hunt test and we have a raffle [and] auction during the event. We were given a call and I decided to auction it off to support cancer research," Reinhardt wrote. So far the initiative has raised more than $6,000 for the American Cancer Society and the Susan G Komen for the Cure.
According to BeautyBeyondBreast.com, the color pink became synonymous with breast cancer in 1991, when cosmetics giant Estée Lauder, Self Magazine and breast cancer patient Charlotte Hayley started the pink ribbon campaign.
01/05/2012 - 06:30
Staying in the zone
Mark Jansson
Jansson is the deputy director of the Project on Nuclear Issues at the Center for Strategic and International Studies.
Few things have gone right since the 2010 Non-Proliferation Treaty Review Conference called for a 2012 meeting to discuss establishing a WMD-free zone in the Middle East. Evidence of nuclear weapons-related research in Iran has continued to mount, putting the world, and particularly Israel, on edge. By now, the enthusiasms of 2010 seem almost quaint. Given the circumstances, it would be tempting for the United States to adopt a less-than-eager attitude toward the 2012 conference, despite being a co-sponsor of the initiative.
But that logic has things exactly backward. The challenges facing the region do not preclude a regional security dialogue; they demand it. While sanctions, embassy stormings, downed reconnaissance drones, sabotage, assassination plots, and International Atomic Energy Agency reports grab headlines and raise concerns that we might be "sleepwalking" into another war in the region, the underlying truth is that establishing a WMD-free zone in the Middle East is more important than ever. This zone would proscribe the stockpiling, use, sale, and transit of nuclear, chemical, and biological weapons and weapons-related technology across the region. Working toward this goal offers an approach to improving regional security that is not just viable but also preferable to arming states to the teeth in order to bolster deterrence. Without the zone, the Middle East will only continue to become more dangerous.
The conference represents an opportunity to begin a process of cultivating trust among states, to demonstrate America's commitment to eliminating WMD throughout the Middle East, and to re-assert diplomacy as the leading edge of US foreign policy in the region. So what can the United States do to reframe the WMD-free-zone debate, inject energy back into the conference process, and make US diplomacy credible on this issue? Here's one idea: Ratify the protocols to the Pelindaba and Rarotonga treaties.
Validating WMD-free zones. The Pelindaba and Rarotonga treaties established nuclear-weapon-free zones in Africa and the South Pacific respectively, and the protocols to each have been sitting on the Senate's treaty docket since May. The protocols call on nuclear weapon states to make legally binding commitments to acknowledge the zones and to refrain from threatening to use nuclear weapons against the states within them. Despite the facts that these protocols have been ratified by every other nuclear weapon state and that the US Defense Department has issued similar guarantees, the ratification process has stalled in the United States Senate. Some senators believe that refraining from using nuclear weapons against states within nuclear-weapon-free zones constitutes accepting "limits" and is therefore symptomatic of a "deeply flawed" policy approach.
But it almost goes without saying that the efficacy of a WMD-free zone in the Middle East, like nuclear-weapon-free zones elsewhere, is undermined when nuclear powers disregard efforts by others to work cooperatively in taking the nuclear factor out of their regional security equation. Besides, if regional security is important enough for the United States to tacitly accept Israel's nuclear opacity and fatalistically discuss future proliferation after an Iranian nuclear breakout, then it should at least warrant the support of efforts by states in other regions who have found another way.
In addition to stemming nuclear expansion in Africa and the South Pacific -- an incredibly important step in and of itself -- ratifying the Pelindaba and Rarotonga treaties would send a clear message to the international community and to the Middle East that the United States subscribes to a view of regional security that is more comprehensive than enforcement of the Non-Proliferation Treaty.
Re-asserting leadership. Ratification would also help ameliorate doubts that have emerged about the level of US commitment to the conference. An early misstep occurred when two of the three countries the United States initially proposed to host the conference -- Canada and the Netherlands -- were each a member of NATO, a self-proclaimed "nuclear alliance." Some Arab countries understandably took this as a sign that the United States was missing the point.
This view was reinforced in August when the top American delegate to a high-level conference-planning meeting, White House WMD czar Gary Samore, backed out at the last minute. Samore claimed to have scheduling conflicts, but confidence in US support for the conference took a blow nonetheless. Then, when the announcement that Finland was selected to host the conference finally came out in October, it was through a terse UN press release issued on a Friday -- a non-work day in the Middle East. Again, the conference seemed like more of an afterthought than a priority.
Of course, ratifying the Pelindaba and Rarotonga protocols will not undo all of this damage, but it would show that the United States takes nuclear-weapon-free zones seriously, that the zones deserve high-level attention, and that the United States intends to honor these zones -- just as it would a WMD-free zone in the Middle East. The United States can use ratification to reclaim leadership on nonproliferation and the Middle East conference.
Making diplomacy credible. We should know by now that the future of nuclear proliferation will not be determined by economic sanctions, debates about compliance, assassinations, or computer viruses designed to hamper a state's nuclear infrastructure. The future of nuclear proliferation will be determined by political decisions countries take with respect to their security. For that reason, the United States cannot afford to let its concerns about rising nuclear tension between Iran and Israel, well-founded as they may be, blind it to the opportunities afforded by the 2012 conference. Even for skeptics, cynicism about the conference is no excuse for inertia in advance of it.
While a lot of mental energy is regularly committed to the question of how to make nuclear deterrence credible, comparatively little is given to how to make nuclear diplomacy credible. If that were not the case, then perhaps it would have occurred to Senate leaders long ago that they should take advantage of the opportunities afforded by the Pelindaba and Rarotonga protocols.
Whatever the cause for delay, ratifying the protocols now would send a timely signal that these zones are viable, that the United States understands the importance of region-wide security initiatives, and that the United States is prepared to lead the 2012 Middle East conference to success. | <urn:uuid:5b05fd7c-04b7-4d8a-ae2f-5d4ba9aff938> | 2 | 2.1875 | 0.030318 | en | 0.949222 | http://thebulletin.org/staying-zone |
Kim Jong Un Sets Example For World Leaders
New reports from North Korea indicate that the new Supreme Leader of the country, Asia, and all parts west will now be hailed as Supreme Leader of the Pacific Ocean and all parts east. It is rather complex being a Supreme Leader in North Korea. For example, among the new duties of Comrade Supreme Leader Kim Jong Un is ensuring the fertility, not merely of the soil, but of the women of North Korea. He will assume these new duties every morning from 9:00 a.m. to 10:30 a.m., during which time young nubile females will allow their bodies to be impregnated with the Supreme fluid of the Supreme Leader.
Kim Jong Un does not hold debates; he simply goes about being a Supreme Leader without any big deal being made of his holy position. Perhaps it is time for other world leaders to acknowledge their need to solicit advice from the Supreme Leader of all Supreme Leaders. He was just awarded leadership of the North Korean armed forces. Every single soldier took a solemn oath to protect the Supreme Leader from all enemies, including flies, coughs and the common cold. They will gladly allow their bodies to catch the cold in order to demonstrate undying love and loyalty to the Supreme Leader, not merely of North Korea, but of planet Earth and all the planets in our solar system.
Published: February 28th, 2009, 10:55 EST
Best Idea to Change the World
Best Idea to Change the World
Generally, we have a strong belief in respecting fundamental civil liberties, lasting peace and democracy. Our initiative should revolve around strategic philosophy, problem solving and planning agendas for peace, not put too much stress on provisional conflict solutions.
People have to realize that peace is a process in itself, and that the creation and development of anything new is possible only in lasting peace. We have to be educated about this. Similarly, raising awareness of peace, justice and democracy is the main message for the people of the world. In order to develop peace in the world, its residents need to be safe and should be able to trust the government or authority that protects them.
In today's world, where people in power are so corrupt and the safety of citizens is hindered, it is very difficult to give peace a chance. When citizens of the world face unfair situations, they are bound to become violent, and the nation starts losing signs of peace. I think the concept of justice has become more prominent in today's world, where terrorism is one of the main threats countries face. Justice can best be achieved by identifying terrorists and punishing them severely. Democracy is very important for any country to achieve development.
If a country cannot be democratic and is always under the rule of dictatorship or of other countries, the people start feeling that they have lost their fundamental rights, freedom and independence. One can confidently state that no peace can be obtained without justice and democracy. Justice and democracy make development possible. If culprits do not get punished, if the innocent suffer, if people cannot live independently, if powerful countries dominate weaker ones, then the only possible outcomes are violence and war.
Countries like Iraq can never experience peace until the US withdraws its army from there and lets Iraq regain its democracy, and until terrorists leave the country and its residents feel they have obtained justice and equality. In order to raise awareness about peace, we have to restore the justice system and make sure that innocent people do not suffer and are not dominated. Only after people start feeling that their voices have been heard and that there can be truth and justice on earth will there be a chance for peace to prevail.
The present situation is that most countries are spending their funds on manufacturing weapons and engaging in war rather than spending them on education, health care or other development issues. There is extreme poverty and hunger throughout the world; in every country certain people live below the poverty line. The funds a government has should be used in such a way that poverty can be eradicated, for only after poverty is eradicated can a country develop. So instead of spending funds only on weapons and wars, the funds have to be distributed equitably and spent in all sectors.
The health status of an individual depends on the services provided by the country. The health sector is very important and is the first priority for everyone; hence, the nation should improve its health care services. We can take the example of Nepal, where health care is at a minimal level. In rural areas there are no health care facilities, and people still go to traditional healers instead of doctors. In the villages there are no facilities and hence no doctors. If the government spent funds on providing facilities, the health of its citizens would improve. Instead of spending funds on only one sector, all sectors should be given equal importance.
Unless the population of a country is educated, the country cannot develop. Schools should be built in the rural parts of the country, and quality education should be provided to all children.
Rather than ignoring sectors like health, education, transportation, communication and development, the government should stop spending only on weapons and focus on these other sectors as well.
The world today faces various problems, and ways must be found to solve them. To stop these problems, or to keep them from recurring, steps should be taken early. Influential people should rethink how the problems can be eradicated, starting from the base: if the root of a plan is strong, the plan will be successful. The leaders, officials and industrialists should think from the root. Leaders are the ones who give direction to a country; hence they should have a clear vision of what its citizens need and where the country should be taken.
According to Dr. Martin Luther King Jr., "a nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual death." The amount of money spent on weapons increases every year; in 2003 the world's military spending increased by 11 percent. Even though the majority of the military expenses occurred in the US due to its war in Iraq, military spending rose throughout the world. In a developing country like Nepal, the budget allocates maximum funds for military purposes, and the majority of those funds go directly to purchasing more weapons of destruction. The effect is a decrease in funding for issues that matter far more to society. Society could benefit much more if the money spent buying weapons were used for education.
Public schools today around the world are full of crime, and the standard of education has declined in the past few years. The number of weapons that children have access to is doubling. In developed countries, almost every week you can hear of a child bringing a gun to school and shooting other schoolchildren. Millions of children around the world are deprived of a normal childhood and have to struggle to survive, and malnutrition is a serious problem children in developing countries face.
Educational institutions need higher standards of education; public schools do not have proper funding, so their students cannot get a proper education. There is scarcity of water and drought in many places, natural disasters occur in every part of the world, and billions of people do not get to eat even one healthy meal a day. Yet all these social issues are neglected, and the majority of the funding every nation receives is spent on building military power and buying more and more weapons every day.
Developing international mechanisms and pressure groups for conflict resolution and respect for human rights is another way to keep the world peaceful. There is a great deal of injustice in the world. It is clearly stated in the United Nations' declaration of human rights that every individual should be treated equally, but many practices still differentiate and treat one group as superior to others. Many people are killed and mistreated due to discrimination of all sorts, be it religious or racial.
Religious discrimination led to millions of Jews being massacred in the Holocaust. A never-ending conflict goes on between Muslims and Hindus in India and Pakistan. African Americans are still isolated and looked down upon in the most developed country in the world, the United States of America. Every day people die in South Asian countries like India and Nepal due to racial discrimination. The rich and the elite have always looked down upon the poor, and the poor and unemployed have always been deprived of benefits that others enjoy.
The best solution to stop such conflicts and to guarantee that human rights are respected throughout the world is to create an international body that can deliver equality and justice to the citizens of the world. The UN has initiated this effort of promoting human rights, which led to the adoption of the Universal Declaration of Human Rights in December 1948. The declaration promotes equality and the right of every human to live equally, but it is not strong enough to enforce all that it states, and it is old and needs to be updated.
Few nations follow the declaration of human rights, and this has led to conflicts and the suppression of some people. Therefore, to resolve the conflicts present worldwide and to ensure that everyone enjoys human rights, it is necessary to develop an international mechanism adopted by all nations. Not only would this body have the power to empower those who have been suppressed and give them their rights, it would also have the power to punish those who violate human rights. This would lead to less conflict, because everyone would be treated equally and there would hardly be any issue for which to blame one another.
Another significant point is that we should create forums for education, peace, advocacy, skills development and so on. For lack of forums, employment and opportunities, youths are left unengaged. Forums can engage and empower youth, leaving less chance for them to become involved in conflict; when youths are employed, their minds are not idle.
Such forums help in peace building and peacekeeping as well. Skills development programs are a must in today's world: if rural people are trained through them, they can sustain their livelihoods. Forums for advocacy make women aware so that they can claim and fight for their rights.
The attackers behind the Flame malware used a collision attack against a cryptographic algorithm as part of the method for gaining a forged certificate to sign specific components of the attack tool. Microsoft officials said on Tuesday that it’s imperative for customers to install the update issued for the problem on Sunday, as it’s possible for other attackers to exploit the same vulnerability without using the collision attack.
Cryptographic hash algorithms are designed to produce unique results for each input. If an attacker is able to find two separate inputs that produce the same hash as outputs, he has found a collision. Two of the more popular hash algorithms, MD5 and SHA-1, both have been found to be vulnerable to collisions. SSL certificates, like the one that the Flame attackers forged to sign the malware, use digital signatures, which can be vulnerable to hash collisions.
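Conceptually, a certificate signature covers a digest of the certificate contents, so any two inputs that hash to the same value carry the same signature. The sketch below only illustrates that hashing step with Python's standard `hashlib` (the "certificate" strings are hypothetical, and this does not produce a collision — real MD5 collisions require specialized chosen-prefix tooling of the kind reported for Flame):

```python
import hashlib

def digests(data: bytes) -> dict:
    """Return hex digests of the same input under several hash algorithms."""
    return {name: hashlib.new(name, data).hexdigest()
            for name in ("md5", "sha1", "sha256")}

# Two hypothetical "to-be-signed" certificate blobs differing by one byte.
cert_a = b"CN=legit.example, serial=1001"
cert_b = b"CN=legit.example, serial=1002"

da, db = digests(cert_a), digests(cert_b)
for name in da:
    # A signature covers the digest, so equal digests imply equal signatures.
    print(name, "collide" if da[name] == db[name] else "differ")
```

For ordinary inputs like these the digests differ under every algorithm; the danger is that for MD5 an attacker can deliberately construct two distinct inputs whose digests match, which is why certificates have moved to SHA-family hashes.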
Microsoft officials said that there is still quite a bit of danger to customers, outside of the Flame malware itself.
The Flame attackers used the forged Microsoft digital certificate to perform a man-in-the-middle attack against victims, impersonating the Windows Update mechanism and installing malicious code instead. Reavey said Microsoft is preparing to change the way that Windows Update works in response to the attack.
The possibility of attacks against Windows Update have been a serious concern for Microsoft officials and customers for many years now. Real-world attacks had not surfaced until the information about the Flame mechanism surfaced. But the way that the Flame attackers used their forged certificate was interesting. They used it to create a fake update server inside an organization that’s been compromised, and then downloading the malicious code to other machines, spreading the malware.
The way that Flame spread among machines had been a mystery until researchers discovered the use of the forged certificate.
Categories: Microsoft
Comments (3)
1. Anonymous
Hard to believe that a collision attack was used. Is there a proof?
If a collision attack was used, the consequence is that any certificate can be faked, no certificate can be trusted. That would lead to the end of CAs.
2. Anonymous
The industry has moved away from MD5, except for legacy support, for this very reason. If you obtain a certificate today, the hash algorithm used will be SHA-1 or SHA-256. It is not the end of CAs, just the end of MD5 hashing in certificates.
SHA-1 also has a collision attack (not yet demonstrated to be practical) and this is why SHA-256 will be replacing it over the next few years in all CAs.
Comments are closed. | <urn:uuid:d9f2d453-0d1a-4c36-b693-c8c1ded2141e> | 3 | 2.734375 | 0.023673 | en | 0.931184 | http://threatpost.com/flame-attackers-used-collision-attack-forge-microsoft-certificate-060512/76648 |
10.6. Shrinking a drive link
Shrinking a drive link has the same restrictions as expanding a drive link. A drive link object can only be shrunk by removing sectors from the end of the drive link. This can be done in the following ways:
The drive link plug-in attempts to orchestrate the shrinking of a drive-link storage object by only listing the last link object. If you select this object, the drive link plug-in then lists the next-to-last link object, and so forth, moving backward through the link objects to satisfy the shrink command.
If the shrink point is the last storage object in the drive link, then you shrink the drive link by interacting with the plug-in that produced the object.
There are no shrink options. | <urn:uuid:23f3711c-c5e1-4136-9163-41017d915953> | 2 | 2.203125 | 0.314539 | en | 0.890005 | http://tldp.org/LDP/EVMSUG/html/shrinkdrivelink.html |
REHANCE: The Director’s Cut
In order to better explain what REHANCE does and why we think it’s great, we created a page to give a basic rundown. Since we still get questions about how REHANCE works, I wrote a more thorough description below to sate those of you with a true thirst for knowledge!
The REHANCE process is a more environmentally-friendly, higher quality alternative to traditional t-shirt printing methods that is unique to TS Designs. Before explaining how REHANCE works, it would be helpful to review how the vast majority of textile printing is accomplished.
Traditional Printing
There are two main formats of screenprinting ink – plastisol and waterbased. A typical screenprinter will take a dyed shirt (which is to say, a shirt that is already a color) and print it with plastisol inks. Plastisol inks are all-around nasty. They create a surface coating on the shirt that feels like plastic (surprise!) leaving the fabric covered with an uncomfortable, rubbery print that will eventually crack and peel off the shirt.
These inks also almost always contain PVC and phthalates; the former emits dioxins (a very potent environmental toxin/pollutant) during manufacture/disposal and the latter are known to cause various negative health effects.
Waterbased inks, on the other hand, soak into and become part of the shirt. They are more permanent, will never crack, peel, or fade, and leave the fabric completely breathable. They also contain no PVC or phthalates.
So why is most printing performed with plastisol over waterbased inks? Because in order to print a light color on a dark shirt, a surface coating must be used.
Analogously, consider watercolor paint vs. latex paint. If you have a black piece of paper and try to paint a light blue watercolor paint on it, the result is less than impressive. The paint will soak into the paper, but since the paint is translucent and does not sit on top of the paper, no color is perceived.
On the other hand, if you paint that piece of paper with latex paint, a paint that is opaque and will sit on the surface of the paper, the color will be bright and vibrant. But you’ll also be able to feel that surface coating, and could chip it away with your fingernail if you tried. This is essentially the same difference between waterbased and plastisol inks.
So while waterbased inks feel better and are more environmentally-friendly, they don’t work well on color shirts. On the other hand, plastisol is less comfortable and harsher to the environment, but easier to work with and more versatile.
REHANCE is the solution to this problem.
The Solution
The REHANCE process utilizes a specially-formulated waterbased ink that resists dye, which means we must print white shirts and then dye them a color (rather than printing on a shirt that is already a color).
Using our example above, we would take white shirts, print them with the REHANCE chemistry, and then garment dye the shirts black. The printed ink will essentially ‘seal’ the area it’s printed on and prevent the color from dyeing or staining that area. As a result, a white print is visible on the shirt. However, this white print is not a surface coating – it is simply the lack of black dye.
Think of it as using painters tape before painting a wall. Tape over the area you don’t want to paint, then peel the tape off afterward and voila! No color. REHANCE essentially works the same way, except there’s nothing to peel off afterward.
The Specifics
Reactive dyes chemically attach to cellulosic compounds (e.g. cotton) on the molecular level by creating a covalent bond between the dye molecule and the hydroxyl groups of the cellulose molecule. On the wild off-chance that you have no idea what that means, I’ll explain in a bit more detail.
When I say “hydroxyl groups,” I’m referring to a single oxygen atom bonded to a single hydrogen atom. In the diagram below of a cotton molecule, the hydroxyl groups are everywhere you see OH.
During the dye process, reactive dyes will bond to these hydroxyl groups to create a permanent color on the shirt. The chemist who developed REHANCE refers to these hydroxyl groups as “dye sites.” The REHANCE chemistry bonds to those hydroxyl groups before the dyes have a chance to. Take away the dye sites in a printed pattern, and no color will bond to the areas printed.
Even Better
So what I’ve just described allows us to achieve a white print on a dark shirt without using a surface coating. But what if I want, say, a light blue print on a dark shirt? Never fear! The REHANCE chemistry can be printed over a waterbased ink to protect that ink from the color the shirt will eventually be dyed. So first: print ink color, second: print REHANCE over ink color. From there, it works exactly the same way as described above, except that the fabric has an ink printed on it before the REHANCE chemistry bonds to the dye sites.
If you’re asking yourself “How can ink be printed in the same place as the REHANCE chemistry and they don’t conflict with each other?” then worry not, for I will make all things clear. Inks, unlike dyes, are affixed to cotton in a completely different way. Normal inks do not bind to hydroxyl groups, so there is no conflict among the inks and REHANCE chemistry for dye sites.
No PVC; no phthalates; no cracking, fading, or peeling; no petroleum products; no rubbery, sticky print across your chest. Just a completely breathable, permanent, colorful print. You could even iron the shirt if you were so inclined (though why anyone would iron a t-shirt is beyond me).
It’s also worth noting that REHANCE printing leaves less stuff on the shirt in general. REHANCE works by stopping stuff from being put on the shirt, whereas plastisol printing adds stuff (ink) on top of the shirt to cover up even more stuff (dye).
Not Discharge
For those familiar with textile printing, it’s important to know that REHANCE is not discharge. Discharge works by taking an already-dyed shirt and printing zinc formaldehyde sulfoxylate to blast that dye out of the shirt. Rather than our method, which prevents dye from bonding to the shirt in the first place, discharge uses harsh chemicals to eliminate the color after it’s already affixed to the cotton.
Learn More
REHANCE is the technology that allows us to stop dye from sticking to fabric in a targeted manner. To learn more about the waterbased inks we use to print the design colors, check out this page.
''Young Einstein'' is an intentionally inaccurate portrayal of UsefulNotes/AlbertEinstein as the son of an apple farmer in Tasmania in the early 1900s. In this movie, Einstein splits a beer atom (with a chisel) in order to add bubbles to beer, discovers the theory of relativity and travels to Sydney to patent it. Here he invents the electric guitar and surfing, while romancing an anachronistic Marie Curie. He invents rock and roll and uses it to save the world from being destroyed due to misuse of a nuclear reactor under the watching eye of a typically inaccurate CharlesDarwin.
Not to be confused with ''WesternAnimation/LittleEinsteins''. Entirely unrelated to ''Film/YoungFrankenstein'', except that both are ridiculous comedies.
!!This film provides examples of:
* AffectionateParody: Of Albert Einstein, Marie Curie and Charles Darwin.
** And pretty much every other historical personage named in the film (including Sir Isaac Newton, in the self-referentially titled book Young Newton).
* AlternateUniverse
* AppliedPhlebotinum: The energy from splitting Beerium Atoms.
* ArtisticLicenseGeography: Albert wanders past Ayers Rock/Uluru on his way from Tasmania to Sydney. Of course, [[RuleofFunny he was lost.]]
* BeethovenWasAnAlienSpy
* EinsteinHair: Yahoo Serious has claimed he decided to make a movie about AlbertEinstein because he has the same hairstyle.
* FacingTheBulletsOneLiner: Rather unexpectedly, from Charles Darwin, as the bomb is about to go off, and he sits quietly while everyone else is trying to run away screaming.
--> Preston Preston: Quick! Where do I run?
--> Charles Darwin: It's an atomic bomb, Mr. Preston. ''There's nowhere to run to.''
* IKissYourHand: Creepy flavor: Preston Preston to Marie Curie.
* LandDownUnder
* {{Leitmotif}}: Albert's is 'Waltzing Matilda', Marie's is a Parisian sounding accordion ditty.
* MyHovercraftIsFullOfEels: When Preston Preston tries to speak French.
* OurLawyersAdvisedThisTrope: "The characters depicted in this photoplay are fictitious, although the names of certain historical persons were used."
* SceneryPorn: Albert's journey to Sydney, via ''lots'' of places that aren't actually on a direct route.
* SteamPunk
* {{Unobtanium}}: Beerium
* XRaySparks: While draining the energy off the 'atomic' bomb with his electric guitar.
--> Don't worry, Marie! They're only electrons! | <urn:uuid:2f13b851-8cad-4197-ae4b-e2075b59a703> | 2 | 2.109375 | 0.970112 | en | 0.889305 | http://tvtropes.org/pmwiki/pmwiki.php/Film/YoungEinstein?action=source |
Jepson eFlora
Key to families | Table of families and genera
Brian Vanden Heuvel & Thomas J. Rosatti
Shrub. Leaf: ± clustered on short-shoots, simple, persistent or drought-deciduous, generally deeply 3–9-lobed, generally with ± sunken glands adaxially, margin generally not toothed, ± strongly rolled under; bases persistent, overlapping, sheathing stem. Inflorescence: flowers generally 1 on short-shoots. Flower: hypanthium ± funnel-shaped, outside hairy, partly glandular or not, bractlets small, lanceolate; sepals 5, overlapping; petals 5, white to cream [yellow]; stamens (15)20–80(125); pistils 1–7(10), simple. Fruit: achene, ± fusiform to oblong, styles persistent, ± hairy.
6 species: southwestern United States, northern Mexico. (Frederick T. Pursh, North American botanist, 1774–1820)
Unabridged etymology: (Frederick T. Pursh, North American botanist, author of Flora Americae Septentrionalis, 1774–1820)
Unabridged references: [Koehler & Smith 1981 Madroño 28:13–25; Henrickson 1986 Phytologia 60:468]
Key to Purshia
1. Pistils (3)4–7(10); styles in fruit 20–60 mm, plumose; leaf lobes (3)5(7), central not spiny at tip ..... P. stansburyana
1' Pistils 1–2; styles in fruit 5–7(10) mm, not plumose; leaf lobes 3(5), central generally spiny at tip ..... P. tridentata
2. Leaves adaxially sparsely nonglandular-hairy, sessile or sunken glands few to many; twig hairs generally glandular ..... var. glandulosa
2' Leaves adaxially densely nonglandular-hairy, sessile or sunken glands 0–few; twig hairs generally nonglandular ..... var. tridentata
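A dichotomous key like the one above is, structurally, a small decision tree. Purely as an illustration (the function name, parameter names, and numeric encodings are my own; the character states are taken from the key itself), its branching logic could be sketched in Python:

```python
def key_to_purshia(n_pistils, fruit_style_mm, styles_plumose, adaxial_hairs_dense):
    """Walk the two-couplet key above and return a taxon name.

    Couplet 1/1' splits on pistil number and fruiting-style characters;
    couplet 2/2' separates the varieties of P. tridentata on the density
    of nonglandular hairs on the adaxial (upper) leaf surface.
    """
    # Couplet 1: (3)4-7(10) pistils; styles 20-60 mm and plumose in fruit
    if n_pistils >= 3 and styles_plumose and 20 <= fruit_style_mm <= 60:
        return "Purshia stansburyana"
    # Couplet 1': 1-2 pistils; styles 5-7(10) mm, not plumose
    # Couplet 2': leaves densely nonglandular-hairy adaxially
    if adaxial_hairs_dense:
        return "Purshia tridentata var. tridentata"
    # Couplet 2: leaves sparsely nonglandular-hairy adaxially
    return "Purshia tridentata var. glandulosa"

print(key_to_purshia(5, 40, True, False))  # Purshia stansburyana
print(key_to_purshia(1, 6, False, True))   # Purshia tridentata var. tridentata
```

Note that real keys are run on specimens, where intermediate or conflicting character states are common; a function like this captures only the key's branching logic, not identification practice.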
| <urn:uuid:e94920c0-a217-480b-a6e6-85a76fc596fc> | 3 | 2.84375 | 0.032213 | en | 0.738364 | http://ucjeps.berkeley.edu/cgi-bin/get_IJM.pl?key=11258 |
NEW YORK (AP) — It looks like a bakery. A warm glow emanates from the windows of big, oven-like machines, and a dusting of white powder covers everything.
This space in an anonymous building in New York's Long Island City neighborhood, just across the river from Manhattan, isn't cooking up breads and pastries, however. It's a factory, filled with 3-D printers "baking" items by blasting a fine plastic dust with lasers.
When a production run is done, a cubic foot of white dust comes out of each machine. Packed inside the loose powder like dinosaur bones in sand are hundreds of unique products, from custom iPhone cases to action figures to egg cups.
Manufacturing is coming back to New York, but not in a shape anyone's seen before. The movement to take 3-D printing into the mainstream has found a home in one of the most expensive cities in the country.
New York's factories used to build battleships, stitch clothing and refine sugar, but those industries have largely departed. In recent years, manufacturing has been leaving the U.S. altogether. But 3-D printing is a different kind of industry, one that doesn't require large machinery or smokestacks.
"Now technology has caught up, and we're capable of doing manufacturing locally again," says Peter Weijmarshausen, CEO of Shapeways, the company that runs the factory in Long Island City.
Weijmarshausen moved the company here from The Netherlands. Another company that makes 3-D printers, MakerBot, just opened a factory in Brooklyn. And in Brooklyn's Navy Yard, where warships were once built to supply the Arsenal of Democracy, there's a "New Lab," which serves as a collaborative workspace for designers, engineers and 3-D printers.
3-D printers have been around for decades, used by industrial engineers to produce prototypes. In the last few years, the technology has broken out of its old niche to reach tinkerers and early technology adopters. It's the consumerization of 3-D printing that's found a hub in New York. The technology brings manufacturing closer to designers, which New York has in droves.
Shapeways' production process is fairly simple. Anyone can upload a 3-D design to Shapeways' website and submit an order to have it "printed" in plastic at the factory. The company charges based on the amount of material a design uses and then ships the final product to the customer. IPhone cases are popular, but many items are so unique they can only be identified by their designer, such as the replacement dispenser latch for a Panasonic bread maker. There's an active group of designers who are "Bronies" — adult fans of the show "My Little Pony: Friendship is Magic" — who print their own ponies. The company prints in a wider range of materials, including sandstone and ceramic, at its original factory in Eindhoven, the Netherlands.
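The pricing model described, charging by the amount of material a design consumes, can be sketched in a few lines. This is a toy illustration only: the rate, the handling fee, and the function itself are invented for the example and are not Shapeways' actual numbers or API.

```python
def quote_price(volume_cm3, rate_per_cm3=1.40, handling_fee=5.00):
    """Toy material-based quote: cost scales with the volume printed."""
    return round(handling_fee + volume_cm3 * rate_per_cm3, 2)

print(quote_price(12.5))  # 22.5 (hypothetical rates)
```

The appeal of the scheme, as the article notes, is that a one-off custom part and a mass-produced one are priced the same way, with no tooling cost up front.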
If that was all Shapeways did, the company would be little more than an outsourced machine shop. But with the help of the Internet, it's taking the business model one step further. Anyone can set up a "shop" on the Shapeways site and let people order prints from their designs. Want a replica skeleton of a Death's-head Hawkmoth? That's $15. How about a full-color sandstone sculpture of actor Keanu Reeves? He's $45.
Under the old mass production model, Weijmarshausen says, designers first need to figure out if there's a market for their product, then raise money for production, and then find a manufacturer, who usually has to custom-make dies for molding plastic. The cost can run to tens of thousands of dollars. After that, the designer must get the product distributed and find out how customers react to it.
"With the Shapeways shop, that process is completely condensed," Weijmarshausen says. "If there is no market for your product, then the only thing you lose is some time."
For its part, MakerBot is spearheading another side of the 3-D printing boom by making affordable desktop 3-D printers. About the size of a microwave oven, the printers feed melted plastic out of "print heads" that move in three dimensions, gradually building objects as the plastic cools. Instead of sending a 3-D design to Shapeways, a MakerBot owner can print an object in plastic at home, as long as it's smaller than a loaf of bread. MakerBot's printers range in price from $2,200 to $2,800.
MakerBot's factory is in an old industrial building on Brooklyn's waterfront, across the street from a Costco and a strip club. Only assembly, testing and repair are done here, so the interior looks more like a workshop than a manufacturing plant. Subcontractors elsewhere do the dirty and noisy jobs like machining of components.
The privately held company agreed in June to sell itself to Stratasys Ltd., a maker of professional 3-D printers, for $403 million in stock. Stratasys is based in Minneapolis and Rehovot, Israel, but Bre Pettis, the CEO of MakerBot, says the factory will stay in Brooklyn.
Pettis looks like a Brooklyn hipster, with his thick-rimmed glasses and upswept hairdo. The company got its start in the borough, and he says keeping the factory here is a rational economic decision. Having the engineers nearby means the company can work fast and introduce more than one new model a year, a crucial advantage in the fast-moving 3-D printing space. Pettis also notes that labor costs are going up in Asia's manufacturing hubs. "Brooklyn Pride" is also a factor.
"You can't underestimate the power of people who take pride in their work," he says.
Weijmarshausen moved Shapeways to the U.S. to get closer to its customers. He picked New York over cities such as San Francisco and Boston because of its design and fashion industry, which meshes well with 3-D printing.
Alas for New York, the consumer 3-D printing industry is still a tiny one, and there's no indication that it could singlehandedly reverse the long, slow flight of manufacturing jobs. Of the 1 million manufacturing jobs the city had at its peak during World War II, 93 percent are now gone, according to the U.S. Bureau of Labor Statistics. The Shapeways factory has 22 employees and plans to ramp up to at least 50, while MakerBot employs 274 people. And the jobs aren't necessarily well paid; Pettis says the MakerBot factory workers make "more than minimum wage."
But Weijmarshausen points out that Shapeways has the potential to provide a livelihood for many more people — successful designers. There are already hundreds of them making "substantial" money from their online Shapeways stores, but he won't reveal specific figures.
David Belt, a real estate developer whose company is refurbishing the New Lab space in the Brooklyn Navy Yard, says there's a demand for products that are made in runs of less than 10,000 units. That's too few to be economical using conventional injection-molding of plastic, but viable with 3-D printing.
One example of the combined power of 3-D printing and direct-to-consumer sales is the Spuni, a new type of spoon for babies. Boston couple Isabel and Trevor Hardy noticed that a baby taking a bite from a regular baby spoon leaves a lot of food on the utensil. Together with their friend Marcel Botha, an entrepreneur who makes medical devices, they sketched up a new spoon that "front-loads" the food, leaving less uneaten. Thanks to a 3-D printer, they had a prototype utensil eight days later, ready to test with a live baby.
"We were able to reproduce what the final spoon would look like physically at a very low cost," Botha says.
With the prototype, Botha and his partners were able to demonstrate the Spuni to buyers through a video on crowdfunding website Indiegogo. Their campaign for donations raised $37,235 — enough to start a mass production run. The spoons are being made in a traditional factory in Germany, but Botha is running the Spuni project from the New Lab in Brooklyn.
On the factory floor in Brooklyn, Adjua Greaves, 32, does quality assurance work, testing MakerBot printers before they're shipped. She used to work in publishing, a signature New York business that's been hurt by the rise of e-books. After freelancing for a while, Greaves wanted a steady job, and says she had "a romantic idea about working in a factory," partly inspired by a Sesame Street episode about the making of crayons.
"I always wanted to have a connection to a factory, but more as an intellectual observer," she says. "The romantic idea of a factory is very, very different from what it's really like in a factory, but it's really, really wonderful to be here." | <urn:uuid:6889bd1a-220a-40cc-8ff7-79660bd2a161> | 2 | 2.421875 | 0.021151 | en | 0.967235 | http://usa.news.net/article/414159/Industries |
Degenerative valvular disease common in older dogs
Oct 01, 2003
Q. Could you provide a brief review of mitral valve insufficiency in older dogs?
A. Dr. Jonathan Abbott at the 21st American College of Veterinary Internal Medicine Forum in Charlotte, North Carolina, gave an excellent lecture on mitral valve insufficiency in dogs. Some relevant points made in this lecture are provided below.
Degenerative valvular disease exceeds 90 percent incidence in dogs older than 9 years of age and most commonly affects the atrioventricular valves.
The clinical manifestations usually result from mitral valve disease (MVD). The clinical syndrome of severe, clinically apparent mitral valve regurgitation is observed almost exclusively in aged, small breed dogs. The Chihuahua, Miniature Poodle and Toy Poodle, Pomeranian and Miniature Schnauzer are predisposed to the development of clinically consequential mitral regurgitation.
The incidence of degenerative MVD is particularly high in the Cavalier King Charles Spaniel. In dogs of this breed, MVD may be apparent at a very early age and in some individuals, the disease is severe and rapidly progressive.
How does that work?
Gross structural changes of the mitral valve apparatus associated with MVD include nodular distortion of the leaflets as well as lengthening and sometimes rupture of the chordae tendineae.
These structural abnormalities prevent normal coaptation of the mitral valve leaflets and contribute to leaflet prolapse resulting in mitral valve regurgitation. When the mitral valve is incompetent, a portion of the left ventricular stroke volume is ejected retrograde into the left atrium increasing the volume and intracavitary pressure of the left atrium.
The regurgitant volume augments the pulmonary venous return that enters the ventricle during diastole. Mitral regurgitation, therefore, imposes a volume load on the left atrium and the left ventricle; dilation and hypertrophy of the atrium and ventricle follow as a consequence.
Ventricular remodeling causes further distortion of the mitral apparatus, which contributes to progressive worsening of mitral regurgitation. Afterload (the forces that oppose myocardial shortening) is low relative to the size of the ventricle and this, together with the increase in preload results in hyperdynamic ventricular performance. The altered loading conditions associated with mitral regurgitation are generally well tolerated; however, with chronicity myocardial dysfunction can develop.
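The volume relationships in the preceding paragraphs are often summarized clinically as a regurgitant fraction, the share of each beat ejected backward into the left atrium. The formula is standard haemodynamics rather than something stated in this article, and the numbers below are invented for illustration:

```python
def regurgitant_fraction(total_stroke_volume_ml, forward_stroke_volume_ml):
    """Fraction of total LV stroke volume ejected retrograde into the LA."""
    regurgitant_volume = total_stroke_volume_ml - forward_stroke_volume_ml
    return regurgitant_volume / total_stroke_volume_ml

# e.g. 60 ml total stroke volume, only 36 ml ejected forward into the aorta
print(regurgitant_fraction(60, 36))  # 0.4
```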
The reduction in forward stroke volume associated with severe mitral regurgitation results in neuroendocrine activation. Specifically, sympathetic tone is elevated and unopposed by vagal restraint.
Additionally, the products of the renin cascade result in vasoconstriction and renal retention of salt and water. The latter contributes to volume loading of the cardiac chambers. Potentially, ventricular filling pressures rise, resulting in pulmonary venous hypertension and the development of pulmonary edema. Clinical signs of tachypnea, polypnea and cough predictably result from the presence of pulmonary edema.
In some small breed dogs with mitral regurgitation, cough develops in the absence of pulmonary edema. Mechanical compression of the bronchi by the enlarged atrium is likely to be an important causative factor in these cases. Reflex bronchoconstriction and mucus production associated with pulmonary venous distention may also contribute.
Signs observed
Cough is the clinical sign that is observed most commonly in dogs with clinically evident mitral regurgitation. Respiratory distress, syncope and abdominal distention resulting from ascites occasionally prompt veterinary evaluation. When the regurgitant volume is large, ventricular hyperkinesis results in a palpably dynamic precordium. The arterial pulse is often normal although very severe mitral regurgitation may result in a weak pulse. Mitral regurgitation results in a systolic murmur that is generally heard best over the left cardiac apex.
Respiratory cough
It should be recognized that primary respiratory tract disorders, including bronchitis and tracheal collapse, often affect the dog population in which MVD is most common. Because the historical findings of cough and tachypnea are common to heart disease and respiratory tract disease, the clinical presentation of a small breed dog with respiratory signs and a heart murmur presents a challenge. MVD is a common disorder that exhibits a broad spectrum of severity; only the minority of dogs with MVD develop clinical signs. Inevitably then, there are dogs in whom the heart murmur is incidental to the clinical presentation.
When confronted with an older small breed dog in which the primary complaint is cough, it is important to determine which, of airway disease or cardiac disease, bears the greatest responsibility for the clinical signs.
Although primary respiratory tract disease and cardiac disease can coexist, one of the two often dominates the clinical presentation. In most cases, the case history, physical examination and thoracic radiographic examination provide the information needed to make this important clinical distinction.
Other clues
A history of months or years of cough that occurs in the absence of dyspnea tends to support a diagnosis of airway disease. When cardiac disease is sufficiently severe that it becomes clinically apparent, it is generally progressive. Therefore, untreated dogs in which cardiac disease plays an important role tend to have a relatively short history; often, the clinical course progresses to include dyspnea.
The character of the dog's cough can also provide some diagnostic information. In general, a loud "hacking" cough is most often associated with diseases that affect the large airways such as extraluminal compression of the mainstem bronchi, chronic bronchitis or collapsing trachea. In contrast, cardiogenic pulmonary edema may cause a soft cough that is often associated with dyspnea.
The body condition of the dog can provide useful clues. Generally, dogs that cough due to heart disease or heart failure are thin. While exceptions certainly occur, obesity suggests that respiratory tract disease is primarily responsible for clinical signs. The vital signs may also be useful in distinguishing dogs that suffer primarily from cardiac disease from those with respiratory disease. Careful examination of the cardiac rate and rhythm is essential.
Healthy dogs often have a respiratory-associated arrhythmia that is evident on auscultation. In accordance with phasic variations in autonomic traffic, the heart rate increases during inspiration and decreases during expiration. This respiratory-induced sinus arrhythmia results primarily from fluctuations in vagal tone. When cardiac performance is impaired, vagal discharge is inhibited and sympathetic tone becomes dominant. Thus, in many dogs with clinical signs related to cardiac disease, tachycardia develops, and there is loss of physiologic respiratory-induced sinus arrhythmia. The clinical finding of respiratory-induced sinus arrhythmia is virtually incompatible with a diagnosis of heart failure and uncommon in dogs with severe heart disease. In contrast, many dogs that cough primarily due to primary respiratory disease have preserved and sometimes accentuated sinus arrhythmia.
What else
In older small-breed dogs, the absence of a cardiac murmur is usually assurance that the cough results from primary respiratory tract disease. When present, the intensity of a cardiac murmur is important. In general, dogs with mild mitral regurgitation have soft murmurs and an increase in the intensity of the murmur typically parallels progression of the disease.
While the relationship between the intensity of the murmur and severity of mitral regurgitation is inconsistent, severe mitral regurgitation almost always results in a loud murmur. Conversely, soft murmurs resulting from MVD are seldom of clinical consequence. The information provided by pulmonary auscultation is important but is seldom specific. Crackles, for example, result from the snapping open of collapsed small airways and may be heard in the presence of pulmonary edema. However, it should be recognized that the "dry" lungs of dogs with bronchitis or airway collapse can also produce crackles.
Radiographic evaluation
MVD results in enlargement of the cardiac silhouette that is roughly commensurate with the severity of mitral valve regurgitation. The cardiac silhouette is typically normal when mitral regurgitation is mild while moderate and severe mitral regurgitation result in enlargement of the left atrium and left ventricle. In the lateral radiographic projection, this is apparent as an increase in the dorsoventral cardiac dimension - the heart is "taller" than normal resulting in elevation of the caudal aspect of the trachea and potentially, compression of the left mainstem bronchus.
Food & the environment
The food industry has a huge impact on the environment at all stages from the field to the plate. Agriculture currently accounts for around 70% of total water use in the world and conventionally grown food crops require high levels of insect/pesticide sprays which affect the health of wildlife, waterways and soil.
As our tastes and diets have become more varied, the distance our food travels from farm to table has also increased massively. Transporting food long distances uses up dwindling fossil fuels and produces greenhouse gases. Generally, the further food has travelled, the more processed it will be and the less nutrients it will have – fresh really is best!
There are two main issues we encourage you to think about: where is your food coming from, and what happens to it when you’re done with it.
Many households on Waiheke grow their own veges and catch their own seafood which helps to reduce food miles. Most of our food though is still ‘imported’ from overseas: Auckland and much further afield, meaning we are currently reliant on our transport links with Auckland to feed the majority of the population. The Waiheke Resources Trust would like to see a Waiheke community that is more self-sufficient, resilient, and proactive when it comes to the food we eat.
If you want to reduce the environmental impact of the food you eat think about starting a vege garden at home and reducing your household food waste. | <urn:uuid:138f3314-7544-490f-b2c1-22d6a1bc605d> | 3 | 3.015625 | 0.044491 | en | 0.96048 | http://wrt.org.nz/take-action/homes/food |
Australia lags field in space policy
Updated October 09, 2012 01:15:09
Demand for satellite bandwidth has surged in the last 5 years, but Australia does not have the satellite industry or the space policy to take advantage, and a Federal Government white paper this week hopes to address that.
Source: ABC News | Duration: 3min 24sec
TICKY FULLERTON, PRESENTER: The space race has been on for decades, but critics say Australia is lagging the field.
This week the Federal Government is putting its satellite utilisation policy up for discussion.
But while Australia is yet to launch a space policy, surging demand has seen prices for satellite bandwidth double over the past five years.
Neal Woolrich reports.
NEAL WOOLRICH, REPORTER: Australia has a long connection with the space race. From the Parkes telescope helping beam back pictures of the original moon landing through to the launch of Australia's first satellites in the 1980s. But until now, Australia has never had a formal space policy.
BRETT BIDDINGTON, CHAIR, SPACE INDUSTRY ASSOC. OF AUST.: I think it's fair to say that the space-faring nations of the world think that Australia hasn't perhaps pulled its weight. I think there's a view that we've been a bit supplicant in our relationship to the United States. We've let them do the heavy lifting on the allied policy pieces.
NEAL WOOLRICH: The Federal Government is looking to rectify that as it develops what it calls a satellite utilisation policy, and those in the industry say the need for a coordinated policy has never been more pressing.
DAVID BALL, CHIEF TECHNOLOGY OFFICER, NEWSAT: We've seen a lot of competition for space. There's a lot of connectivity needed from Australia to the Middle East, Australia to South Asia, for our customers. And we're finding it increasingly difficult to buy capacity on the market. There's not a lot of supply out there of good quality capacity.
NEAL WOOLRICH: Satellites perform important roles in much of everyday life, from location services in smartphones to GPS maps and timing signals for credit card transactions. Brett Biddington says Australia has a role to play in space diplomacy to ensure those applications will work into the future.
BRETT BIDDINGTON: Space is becoming more cluttered and contested and it's very important for Australia to be able to say to the rest of the world that we are going to participate in developing new rules of the road for space.
NEAL WOOLRICH: At the same time, prices to access satellites are soaring as the supply of bandwidth fails to keep up with the surging demand. David Ball is chief technology officer for NewSat, which provides satellite capacity to mining, oil and gas firms as well as to government. NewSat leases its capacity from other satellite owners, but late next year the company will be launching its own satellite to cope with the surge in demand.
DAVID BALL: A lot of user appetite for bandwidth you see going up and up, people want larger pipes, faster connectivity and that means more and more space (inaudible) needed. So it's a pretty challenging time to try and buy space. And we've probably seen space segment prices go up about 100 per cent over the last five years.
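As a back-of-the-envelope check on that figure (illustrative arithmetic only, not part of the interview), a 100 per cent rise over five years corresponds to roughly 15 per cent compound growth per year:

```python
# Solve (1 + r)^5 = 2 for the implied annual growth rate r
rate = 2.0 ** (1 / 5) - 1
print(f"{rate:.1%}")  # 14.9%
```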
NEAL WOOLRICH: David Ball says Australia is well placed to develop its satellite industry and NewSat will be keenly watching how the Government's space policy develops.
DAVID BALL: I think it'll be helpful for Australia to have that singular focus at the international level to give us a voice in the international community. For too long I don't think we've had that focus from government on where our space policy is headed as a nation.
BRETT BIDDINGTON: We need to ensure that the space domain remains secure and assured so that those services can go on forever.
NEAL WOOLRICH: Consultation with the industry starts this week, but in the meantime, satellite bandwidth prices are expected to continue firming. However, it remains to be seen whether the Government has the resolve or the financial means as it pushes to deliver a budget surplus to launch a space policy that puts Australia in the race. | <urn:uuid:abca3fde-a79d-4e96-8195-446399d60e42> | 2 | 2.4375 | 0.020798 | en | 0.95715 | http://www.abc.net.au/news/2012-10-08/to-boldly-go/4302192 |
The Ancient Library
and, when curule aedile, in b. c. 60, seduced the wife of M. Lucullus, whence Cicero, combining this intrigue with Memmius's previous hostility to L. Lucullus, calls him a Paris, who insulted not only Menelaus (M. Lucullus), but Agamemnon also (L. Lucullus). (Cic. ad Att. i. 18. § 3; comp. Val. Max. vi. 1. § 13.) Memmius was praetor in b. c. 58. (Cic. ad Quint. Fr. i. 2, 5, 15.) He belonged at that time to the Senatorian party, since he impeached P. Vatinius, consul in b. c. 47 (Cic. in Vatin. 14); opposed P. Clodius (id. ad Att. ii. 12); and was vehement in his invectives against Julius Caesar (Suet. Caes. 23, 49, 73; Schol. Bob. in Cic. pro Sest. p. 297, in Cic. Vatinian. p. 317, 323, Orelli); and attempted to bring in a bill to rescind the acts of his consulate. Before, however, Memmius himself competed for the consulship, b. c. 54, he had been reconciled to Caesar, who supported him with all his interest. (Cic. ad Att. iv. 15, 17; Suet. Caes. 73.) But Memmius soon again offended Caesar by revealing a certain coalition with his opponents at the comitia. (Cic. ad Quint. Fr. ii. 15, ad Att. iv. 16, 18.) Memmius was impeached for ambitus, and, receiving no aid from Caesar, withdrew from Rome to Mytilene, where he was living in the year of Cicero's proconsulate. (Cic. ad Quint. Fr. iii. 2, 8, ad Fam. xiii. 19, ad Att. v. 11, vi. 1.) Memmius married Fausta, a daughter of the dictator Sulla, whom he divorced after having by her at least one son, C. Memmius [No. 9]. (Ascon. in Cic. pro M. Aemil. Scaur. p. 29, Orelli; Cic. pro Sull. 19.) He was eminent both in literature and in eloquence, although in the latter his indolence, his fastidious taste, and exclusive preference of Greek to Roman models rendered him less effective in the forum. (Cic. Brut. 70.) Lucretius dedicated his poem, De Rerum Natura, to this Memmius, and Cicero addressed three letters to him (ad Fam. xiii. 1–3).
9. C. memmius, son of the preceding by Fausta, daughter of Sulla the dictator, was tribune of the plebs in b. c. 54. He prosecuted A. Gabinius, consul in b. c. 58, for malversation in his province of Syria (Cic. ad Quint. Fr. iii. 1. 5, 15, 2. 1, 3. 2, pro Rabir. Post. 3; Val. Max. viii. 1. § 3), and Domitius Calvinus for ambitus at his consular comitia in b. c. 54 (Cic. ad Quint. Fr. iii. 2. § 3, 3. 2). Memmius addressed the judices in behalf of the defendant at the trial of M. Aemilius Scaurus in the same year (Ascon. in Cic. Scaurian. p. 29, Orelli). Memmius was step-son of T. Annius Milo who married his mother after her divorce by C. Memmius (No. 7). (Ascon. l. c.; Cic. pro Sull. 19.) Memmius was consul suffectus in b. c. 34, when he exhibited games in honour of one of the mythic ancestors of the Julian house, Venus Genetrix. (Dion Cass. xlix. 42.)
10. P. memmius, was cited as a witness for the defendant at the trial of A. Caecina, b. c. 69. (Cic. pro Caec. 10.) [caecina, No. 1.]
11. P. memmius regulus, was supplementary consul in a. d. 31 (Fasti; Dion Cass. lviii. 9), and afterwards praefect of Macedonia and Achaia, in which office he received orders from Caligula to remove to Rome the statue of the Pheidian Jupiter from Olympia. (Joseph. Antiq. xix. 1; Pausan. ix. 27; comp. Dion Cass. l. 6.) Memmius was the husband of Lollia Paulina, and was compelled by Caligula to divorce her. (Tac. Ann. xii. 23; Suet. Cal. 25; Dion Cass. lix. 12; Euseb. in Chron.; comp. Tac. Ann. xii. 1.) Memmius died in a. d. 63. (Tac. Ann. xiv. 47.)
12. C. memmius regulus, son, probably, of the preceding, was consul in a. d. 63. (Fasti; Tac. Ann. xv. 23 ; Gruter, Inscr. p. 8.)
13. L. memmius pollio, was supplementary consul in a. d. 49. Memmius was a creature of Agrippina's, the wife of Claudius, and was employed by her to promote the marriage of her son Nero with the emperor's daughter Octavia. (Tac. Ann. xii. 9.)
14. C. memmius, C. p., is only known from coins of the republican period, a specimen of which is annexed. The obverse bears the head of Ceres, with c. memmi, c. p. : the reverse a trophy supported by a captive, with c. memmivs imperator. This coin is of beautiful workmanship. [ W. B. D.]
MEMNON (Μέμνων), a son of Tithonus and Eos, and brother of Emathion. In the Odyssey and Hesiod he is described as the handsome son of Eos, who assisted Priam with his Ethiopians against the Greeks. He slew Antilochus, the son of Nestor, at Troy. (Hes. Theog. 984, &c.; Hom. Od. iv. 188, xi. 522; Apollod. iii. 12. § 4.) Some writers called his mother a Cissian woman, from the Persian province of Cissia. (Strab. p. 728; Herod. v. 49, 52.) As Eos is sometimes identical with Hemera, Memnon's mother is also called Hemera. [Eos.] Homer makes only passing allusions to Memnon, and he is essentially a post-Homeric hero. According to these later traditions, he was a prince of the Ethiopians, and accordingly black (Ov. Amor. i. 8. 4, Epist. ex Pont. iii. 3. 96; Paus. x. 31. § 2); he came to the assistance of his uncle Priam, for Tithonus and Priam were step-brothers, being both sons of Laomedon by different mothers. (Tzetz. ad Lyc. 18.) Respecting his expedition to Troy there are different legends. According to some, Memnon the Ethiopian first went to Egypt, thence to Susa, and thence to Troy. (Paus. i. 42. § 2.) At Susa, which had been founded by Tithonus, Memnon built the acropolis which was called after him the (Herod. v. 53, vii. 151; Strab. p. 728; Paus. iv. 81. § 5.) According to some, Tithonus was the governor of a Persian province, and the favourite of Teutamus; and Memnon obtained the command of a large host of Ethiopians and Susans to succour Priam. (Diod. ii. 22, iv. 75; Paus. x. 31. § 2.) A third tradition states that Tithonus sent his son to Priam, because Priam had made him a present of a golden vine. (Serv. ad Aen. i. 493.) Dictys Cretensis (iv. 4) makes Memnon lead an army of Ethiopians and Indians from the heights of Mount Caucasus to Troy. In the fight against the Greeks he was slain by Achilles.
The principal points connected with his exploits at Troy are, his victory over Antilochus, his contest with Achilles, and lastly, his death and the removal of his body by his mother. With regard to the first, we are told that Antilochus, the
Sheriff: the county’s most powerful public figure
By Sheriff Tommy Allen
January 2, 2014
The New Year is here and 2014 will see many exciting things. One of those is the election of a new sheriff for Anson County. There will also be commissioner elections, state House and Senate races, and some national races for Washington offices. But few local races draw as much attention as that of the local sheriff’s race. That is as it should be, because the sheriff is considered the most powerful elected local office. The sheriff is an extremely powerful individual.
I thought I’d address the importance of this year’s sheriff’s election over the next two months by starting with some history of the office of sheriff and the duties and responsibilities. Next month I’ll write about what the public should look for and expect from their sheriff.
The office of sheriff goes back well into the first millennium. It originated in England where it first came into existence around the 9th century. This makes the sheriff the oldest continuing, non-military law enforcement entity in history. In early England the land was divided into geographic areas called shires. Within each shire was an individual known as a “reeve,’ appointed by the King, to protect the King’s interest and carry out the edicts and acts of the King. Through time and usage, the words shire and reeve came together to be the “shire-reeve,” the guard of the shire; and eventually the word “Sheriff” was used.
The term and concept of sheriff came to the new world with the American colonies. The first sheriffs were generally appointed by large land holders. Eventually most sheriffs were elected and served at the pleasure of the public they served. The early American sheriff was important to the security of the people and had much power including the execution of criminals by hanging.
As the new world expanded westward so did the Office of Sheriff. There were many interesting individuals, including Augustine Washington, sheriff of Westmoreland County, Va., and the father of George Washington. There were also Wild Bill Hickok, Pat Garrett, Bat Masterson and Wyatt Earp in towns such as Dodge City, Deadwood and Tombstone.
In the United States today there are sheriffs in all states except three. Alaska has no county government and no sheriff. Connecticut has no sheriff but a State Marshal system. Hawaii has no sheriff but there is a “deputy sheriff” division in their Department of Public Safety. Most sheriffs today are elected.
The modern day sheriff is generally considered the chief law enforcement officer in the county. In addition to criminal investigations and responding to law enforcement-related calls, the sheriff is responsible for operating the county jail; security in the court system; serving civil papers; and transporting prisoners and mental patients to and from various facilities. Today's sheriff must develop good working relationships with all local, state and federal law enforcement agencies.
North Carolina has 100 counties and 100 sheriffs. One is female.
A typical day in the life of a sheriff involves meeting with office staff, jail staff or 911 staff about the day's activities; handling dozens of telephone calls and emails; dealing with walk-in visits from citizens with various issues; attending one, two or more meetings on any number of issues; and reviewing budgets and signing off on purchase orders, invoices and other financial documents. The sheriff will also greet judges and other court officials for whatever courts may be in session that day. This list could continue for several more paragraphs.
Today's sheriff runs a multi-million dollar operation like a business, and must have a good sense of how to run that business. His product is "service" and the public are his customers. Today's modern sheriff must be a skilled businessman; understand how to deal with both the public and his staff; have an extremely good working knowledge of both criminal and civil law; and do all this with a sense of fairness and an understanding of human nature. Next time I'll discuss what is needed to do all this and serve in the most powerful position in county government.
Art and the Second World War
written by Monica Bohm-Duchen
Lund Humphries Publishers | ISBN 9781848220331
Hardback – 288 pages
Member’s price: $71.28
Usually ships within 2–11 business days.
Monica Bohm-Duchen's thought-provoking analysis ranges from iconic paintings such as Picasso's Guernica to unfamiliar works by little-known artists. She reinstates war art by major artists as an integral part of their oeuvres and examines neglected topics such as the art produced in the Japanese-American and British internment camps, by victims of the Holocaust, and in response to the dropping of the atom bomb in 1945. In so doing, Bohm-Duchen addresses a host of fundamental issues, including the relationship between art and propaganda and between art and atrocity, and the role of gender, religion, and censorship, both external and internal.
Art and the Second World War offers an unparalleled comparative perspective that will appeal to anyone interested in art history, military history, or political and cultural studies.
Time to up the pressure on Myanmar
Publication Date : 18-05-2014
Nobody said the road of democratic reform would be smooth and direct for Myanmar, whose latest bitter setback came with the United States extending economic sanctions against the country for another year.
President Barack Obama told Congress that despite some positive steps on reform, Myanmar must do more if the US is to give its government a clean bill of health.
The sanctions remain in place under the National Emergencies Act, which prohibits US businesses and individuals from investing in Myanmar or doing business with individuals involved in repression of democratic development.
Obama, who visited the country in 2012, said the Myanmar government had made some progress, pointing to the release of more than 1,100 political prisoners, improvement in labour laws, and the push for a nationwide ceasefire.
However, "the situation in the country continues to pose an unusual and extraordinary threat to the national security and foreign policy of the United States", he said.
Patrick Ventrell, spokesman for the White House National Security Council, announced the US had extended the penalties for another year "in order to maintain the flexibility necessary to sanction bad actors and prevent backsliding on reform even as we broadly ease sanctions".
Last year the US lifted a travel ban imposed on the rulers of Myanmar's previous military government and their business cronies. But as Ventrell pointed out, significant worries remain, including widespread and orchestrated violence against Muslims and other minority groups.
While Washington stopped short of blaming the Thein Sein administration for the violence, the sanctions nevertheless lend credibility to claims by human rights organisations and activists that the government is turning a blind eye to the attacks or perhaps even instigating them.
The recent official denial of a claimed massacre of Muslims in Du Chee Yar Tan, Rakhine State, is a case in point. The claims were backed by a United Nations investigation, not yet made public, which said at least 40 Rohingya Muslims were killed in the incident.
The massacre, coupled with the denial, add to evidence that the post-junta government is doing little to rein in militant Buddhists and other nationalist elements fomenting violence against minorities. Besides not doing enough to protect them, the government has also backed severe restrictions on the basic rights of Muslims, such as their freedom of movement in Rakhine State.
Under the watch of the post-junta government, hundreds of thousands of Rohingya have been forced out of their homes by mob violence. Some have taken their chance on the high seas, making perilous journeys that often end in refugee camps in neighbouring countries or in being sold into slavery.
The violence against Muslims is also a test case for Western countries and the donor community, which have poured aid and investment into Myanmar. So far, many have chosen to remain quiet in the hope that Nay Pyi Taw would change its attitude towards the Muslims and other minorities.
But now that Washington has taken the lead, it's time that the international community broke its silence and added to the pressure for reform. Otherwise, this ethnically diverse country of more than 60 million people is in danger of slipping back into the violent and repressive ways of its decades under military dictatorship.
Pediatric Urinary Tract Infections
KEYWORDS: Cystitis, renal abcess, dysuria, hematuria, pyelonephritis, hydronephrosis, UTI
Learning Objectives
Pediatric UTIs are a major health care issue. Urinary tract infections (UTIs) affect 3% of children every year. Annually, pediatric UTIs account for over 1 million office visits in the U.S. (0.7% of all physician visits by children). Furthermore, each year there are approximately 13,000 pediatric admissions for pyelonephritis, with inpatient costs exceeding $180 million. Throughout childhood, the risk of UTI is 8% for girls and 2% for boys. Sexually active girls experience more UTIs than sexually inactive girls. However, during the first year of life, more boys than girls get UTIs, with a tenfold increased risk for uncircumcised compared to circumcised boys.
The anatomic location of the UTI is germane to etiology and clinical presentation. Regardless of UTI location, infants and many young children cannot describe their symptoms; hence it is critical to understand the observable signs and symptoms of infection to make the diagnosis. Lower UTIs include bladder infections (cystitis), whereas upper UTIs include pyelonephritis and perinephric and renal abscess. Cystitis is second in frequency only to respiratory infection as a reason for pediatric medical visits. Classic symptoms of cystitis include urinary frequency, urgency, dysuria, hematuria, suprapubic pain, a sensation of incomplete emptying, and even incontinence. Non-specific symptoms can include poor feeding, irritability, lethargy, vomiting, diarrhea, ill appearance, and abdominal distension (Table 1). Fever and flank pain are unusual symptoms for lower UTI.
Table 1. Signs and symptoms of pediatric UTI

Lower Urinary Tract
Classic: Frequency, Urgency, Dysuria, Hematuria, Incomplete emptying, Incontinence
Non-specific: Poor appetite, Irritability, Lethargy, Vomiting, Diarrhea, Abdominal distension

Upper Urinary Tract
Classic: Fever, Flank pain, Dysuria, Hematuria, Frequency, Urgency
Non-specific: Poor appetite, Irritability, Lethargy, Vomiting, Diarrhea, Abdominal distension
Pyelonephritis, and to a lesser degree renal abscess, typically begins as a lower UTI that ascends to become an upper UTI. However, pyelonephritis and renal abscesses can also result from hematogenous spread of infection (e.g., bacteremia). Symptoms that occur with upper UTIs overlap those of cystitis, in part because cystitis is common in both. In upper UTIs, flank pain and fevers (classically intermittent and >39°C) are more pronounced and important (Table 1).
Fungi and viruses can also cause cystitis in certain settings and with associated risk factors. Fungi are the second most common cause of nosocomial UTI in children, can spread systemically, and can be life-threatening. Risk factors for fungal UTIs include the use of invasive devices (IVs, drains, catheters), previous broad-spectrum antibiotic exposure, and systemic immunosuppression. A true candidurial infection can be difficult to diagnose, since candiduria can represent colonization, contamination, or infection, and may or may not have associated symptoms. Suggestive diagnostic criteria include lack of pyuria and >10⁴ colony-forming units/mL (in neonates) from a urine culture obtained by urethral catheterization. The potential for candiduria to develop into invasive candidiasis in the neonatal intensive care unit (NICU) is significant. Risk factors for this progression include prematurity, congenital urinary tract abnormalities, parenteral nutrition, respiratory intubation, and umbilical artery or intravenous catheterization. Furthermore, the kidney is the most commonly affected organ in candidiasis, with "fungus balls" representing a life-threatening infection. As such, renal and bladder sonography is important in the evaluation of neonates with candiduria.
There is no consensus regarding the treatment of pediatric candiduria. Measures include stopping antibiotics, removing or changing indwelling catheters, and antifungal therapy. Commonly used antifungal agents include oral fluconazole and parenteral or intravesical amphotericin B. In patients with obstruction or failure to improve with medical management, urgent percutaneous nephrostomy tube placement to drain the kidney may be needed. Additional measures include amphotericin B irrigation of the nephrostomy tube, or even nephrectomy in severe cases.
Viral cystitis represents another form of non-bacterial UTI affecting children. Adenovirus types 11 and 21, influenza A, polyomavirus BK, and herpes simplex viruses can cause irritative voiding symptoms, hemorrhagic cystitis and even vesicoureteral reflux or urinary retention. In non-immunized or immunosuppressed children, herpes zoster cystitis presents similarly. Fortunately, these forms of cystitis are self-limited. Immunosuppressed children undergoing kidney or bone marrow transplantation, or those receiving chemotherapy are especially susceptible to viral cystitis, including those caused by cytomegalovirus and adenoviruses 7, 21, and 35. Antivirals such as ribavirin and vidarabine may be helpful when viral cystitis is diagnosed.
Acute sequelae of pediatric bacterial UTI include the spread of infection outside the urinary tract, resulting in epididymitis or orchitis in boys, and sepsis. The most common serious sequelae of pediatric UTI are those due to pyelonephritis. Chronic pyelonephritis results from persistent infection after acute pyelonephritis and can lead to pyonephrosis, xanthogranulomatous pyelonephritis (XGP), and renal parenchymal scarring with hypertension and renal insufficiency. The accumulation of purulent debris in the renal pelvis and urinary collecting system, known as pyonephrosis, occurs when pyelonephritis is accompanied by urinary tract obstruction. Pyonephrosis requires appropriate antimicrobial therapy and prompt drainage of the urinary tract with percutaneous nephrostomy tube placement or retrograde catheterization.
XGP is a rare clinical entity in children affecting < 1% of cases with renal inflammation. Like pyonephrosis, it develops in the setting of chronic obstruction and infection. The most common pathogens causing XGP are Proteus and E. coli. XGP is usually unilateral and may extend diffusely throughout the affected kidney and even into the retroperitoneum and cause fibrosis of the great vessels. Radiographically, it can be mistaken for childhood renal tumors. Histologically, the XGP kidney shows evidence of pyonephrosis and xanthoma cells, which are foamy, lipid-laden macrophages. Treatment often involves nephrectomy.
Pyelonephritogenic scarring with renal parenchymal damage occurs more commonly in children than adults for unclear reasons. Renal scarring from pyelonephritis appears to be influenced by at least 5 factors: age, treatment, host immunity, intrarenal reflux, and urinary tract pressures. Future hypertension occurs in at least 10-20% of children with pyelonephritogenic scarring. Hypertension in this setting occurs independent of the degree of renal scarring.
Children with recurrent pyelonephritis may also develop progressive renal insufficiency without a UTI symptoms. End-stage renal disease from reflux nephropathy (pyelonephritogenic scarring in the setting of vesicoureteral reflux, discussed below) accounts for 7-17% of end-stage renal disease worldwide, and 2% of cases in the U.S.
Figure 1. Algorithm for management of pediatric UTI (From: Marotte, Lee, Shortliffe, AUA Update Series vol 24, Lesson 19, 2005).
A thorough history from the parents, and from the child if possible, and a physical examination are essential in the evaluation of pediatric UTI. Dipstick urinalysis is the most common initial laboratory test, and may be the most cost-effective screen for infant UTI. Urine cultures and blood cultures (if sepsis is suspected) are the mainstays of diagnosis. Bagged and voided specimens are easier to obtain from a child, but have significant false-positive rates because of contamination with skin flora (up to 63% for the bag method).
Urethral catheterization and suprapubic aspiration provide the best urine specimens for the diagnosis. The standard definition for bacterial UTI from a voided urine culture is 10⁵ colony-forming units/mL.
The likelihood of UTI can also be estimated based on urine bacterial counts and collection method. The presence or absence of pyuria on urinalysis, along with a urine culture, helps make the diagnosis of pediatric UTI (Figure 1). Pyuria with a negative urine culture suggests viral infection, infection with fastidious organisms such as mycobacterium or haemophilus, or noninfectious cystitis. The lack of pyuria and a negative urine culture suggests a non-infectious etiology for cystitis. A positive urine culture along with pyuria likely represents bacterial or fungal infection. A positive urine culture without pyuria may indicate contamination or an immunosuppressed host.
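The pyuria/culture interpretations above form a simple two-by-two decision table. As an illustrative sketch only — the function name and result strings are our own shorthand, not a clinical decision tool — the logic can be written as:

```python
def interpret_workup(pyuria: bool, culture_positive: bool) -> str:
    """Map urinalysis and culture findings to the interpretations
    described in the text. Illustrative only -- not for clinical use."""
    if pyuria and culture_positive:
        # Inflammation plus growth: likely true infection
        return "likely bacterial or fungal infection"
    if pyuria and not culture_positive:
        # Inflammation without growth on routine culture
        return "viral, fastidious organism, or noninfectious cystitis"
    if not pyuria and culture_positive:
        # Growth without inflammation
        return "possible contamination or immunosuppressed host"
    # Neither pyuria nor growth
    return "non-infectious etiology likely"
```

For example, `interpret_workup(True, False)` captures the pyuria-with-negative-culture branch of the table.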
After establishing the diagnosis of UTI, certain children require additional testing to determine possible causes for their infection. This is important as eradication of UTI with antibiotics may not be possible without correction of underlying structural abnormalities. In addition, the early diagnosis of anatomically based UTI's can prevent or ameliorate long-term sequelae of persistent or recurrent infections. The American Association of Pediatrics has suggested guidelines for radiologic imaging of children with UTI's. Urinary tract imaging is recommended in a febrile infant or young child between the ages of 2 months and 2 years with a first documented UTI. Typically this involves a renal and bladder ultrasound and a voiding cystourethrogram (VCUG) (Figure 2).
Figure 2. VCUG in a 3 month-old showing R>L vesicoureteral reflux of contrast into the upper urinary tract (ureter and renal pelvis).
The evidence supporting the use of VCUG for older children is less compelling. Imaging is indicated if patients have known anatomic structural abnormalities, unusual uropathogens such as Proteus or tuberculosis, fail to improve with appropriate antimicrobial therapy, or have an unclear source of infection. VCUG should be performed as soon as a child is infection-free and bladder irritability has passed, since delaying the VCUG is associated with losing patients to follow-up. Other radiologic studies, including computerized tomography (CT), magnetic resonance imaging (MRI), intravenous urography (IVU), and dimercaptosuccinic acid (DMSA) and technetium-99m mercaptoacetyltriglycine (MAG-3) scans, have specific indications that will be discussed further.
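The imaging indications summarized in this paragraph and the preceding one (a first documented febrile UTI between 2 months and 2 years, known structural abnormality, unusual uropathogen, failure to improve on therapy, or unclear source of infection) can be gathered into a single rule. This is a rough sketch under stated assumptions — all parameter names are our own, and the simplification is no substitute for the guidelines themselves:

```python
def imaging_indicated(age_months: float,
                      febrile: bool,
                      first_documented_uti: bool,
                      structural_abnormality: bool = False,
                      unusual_pathogen: bool = False,
                      failed_to_improve: bool = False,
                      unclear_source: bool = False) -> bool:
    """Return True when renal/bladder ultrasound and VCUG are suggested
    per the indications summarized in the text. Illustrative only."""
    # Guidance as summarized above: first documented febrile UTI
    # between 2 months and 2 years of age
    if first_documented_uti and febrile and 2 <= age_months <= 24:
        return True
    # Indications described for older children, regardless of age
    return any([structural_abnormality, unusual_pathogen,
                failed_to_improve, unclear_source])
```

So a 6-month-old with a first febrile UTI qualifies on age alone, while an older child qualifies only through one of the specific indications.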
Antibiotic regimens for children with UTI consist of short treatment courses for acute infections and prophylaxis for chronic conditions. The most common pathogen isolated in children with uncomplicated cystitis is Enterobacteriaceae. Accordingly, frequently used antibiotics for prophylaxis and treatment include trimethoprim (with or without sulfonamide) and nitrofurantoin, which are effective in 96% of children. Prolonged antibiotic use can alter gut and periurethral flora, leading to bacterial resistance. As an example, children on antibiotic prophylaxis have a higher incidence of UTI due to Enterobacter, Klebsiella, and Proteus. In addition, widespread use of antibiotics in certain communities has led to increased bacterial resistance to trimethoprim, cephalothin, cephalexin, ampicillin, and amoxicillin. Clearly, antibiotics should be used judiciously to curb increasing bacterial resistance (Table 2).
Table 2. Commonly used oral antibiotics for treating pediatric urinary tract infections
Except for children with vesicoureteral reflux or other structural abnormalities of the urinary tract, the use of antibiotic prophylaxis is controversial. Adolescent girls who are sexually active and susceptible to post-coital cystitis are likely better served by taking short courses of antibiotic treatment when symptoms occur and taking brief, post-intercourse prophylaxis rather than using long-term, prophylactic antibiotic therapy.
Vesicoureteral Reflux
Vesicoureteral reflux (VUR) is the retrograde flow of urine from the bladder into the ureter and, often, into the renal collecting system. Approximately 40% of children with UTI are subsequently diagnosed with VUR. Primary VUR results from a congenital abnormality of the ureterovesical junction, whereas secondary VUR is caused by high pressure voiding due to neuropathic bladder, posterior urethral valves or dysfunctional elimination syndrome. VUR is also a risk factor for pyelonephritis, with potential for renal injury.
The radiographic diagnosis of VUR is primarily made based on upper tract urinary reflux observed on VCUG (Figure 2). Finding hydronephrosis on renal sonography is inconsistent and not diagnostic of VUR. DMSA scans are used to assess renal cortical function and monitor for renal scarring. Children with VUR may be managed either medically or surgically, and controversy exists regarding the optimal treatment. Medical management encompasses daily antibiotic prophylaxis and periodic radiologic reassessment of the urinary tract, since many children spontaneously resolve VUR. Surgical treatment of primary VUR includes open or laparoscopic ureteral reimplantation and subureteric endoscopic injection of various substances, including dextranomer-hyaluronic acid copolymer. Because secondary VUR has other causes than simple anatomical ones, it is imperative that these causes are ruled out before antireflux surgery.
Ureteropelvic Junction Obstruction
Ureteropelvic junction (UPJ) obstruction accounts for 64% of children born with hydronephrosis. This condition results from poor peristalsis of the UPJ or an anatomic abnormality consisting of either an "intrinsic," narrow segment with muscular discontinuity, or an "extrinsic" anatomic cause from aberrant vessels or a high insertion of the ureter into the renal pelvis (Figure 3). Presenting symptoms include hematuria, UTI, abdominal mass or pain, nausea, or flank pain which worsens with diuresis (also known as Dietl's crisis). Evaluation of UPJ obstruction includes renal ultrasonography, a VCUG to rule out VUR (33% of cases), and a MAG-3 diuretic renogram to look for delayed drainage on the affected side.
Figure 3. Example of ureteropelvic junction obstruction. (From: kidney.niddk.nih.gov)
Management of UPJ obstruction is dictated by age at diagnosis, severity and stability of hydronephrosis, severity of delayed drainage, and degree of associated symptoms. In some asymptomatic children, UPJ obstruction will resolve spontaneously with expectant management. For many children, however, surgical repair is needed through either open surgical pyeloplasty, the traditional approach, or newer techniques such as laparoscopic pyeloplasty, robot-assisted laparoscopic pyeloplasty, and percutaneous and retrograde endopyelotomy.
A ureterocele is a cystic dilatation of the terminal, intravesical portion of the ureter (Figure 4). Eighty percent of ureteroceles drain the upper pole of a duplex kidney (two collecting systems). Sixty percent of ureteroceles have an ectopic orifice in the urethra. A UTI in the first few months of life is a common presentation for a child with a ureterocele. Sometimes the obstructed upper pole drained by a ureterocele is so hydronephrotic that it is palpable as an abdominal mass.
Figure 4. Example of a left sided ureterocele.
Ureteroceles are diagnosed by ultrasonography, which typically shows a cystic intravesical mass in the posterior bladder, a dilated proximal ureter, and a hydronephrotic or dysplastic upper pole of a duplex kidney. IVP may demonstrate the "drooping lily" sign, which is a lower pole collecting system displaced downward by a dilated upper pole. This sign can also be observed on VCUG, since up to 50% of ipsilateral lower pole moieties will reflux. Treatment of ureteroceles is guided by clinical presentation and remaining kidney function. Infants and children presenting with sepsis are initially treated with endoscopic incision of the ureterocele to drain it and relieve obstruction. Ureteroceles draining nonfunctioning upper pole moieties can be treated by removal (heminephrectomy and ureterectomy) and the ureterocele itself can be removed through open reconstruction.
Ectopic Ureters
A ureteral orifice is classified as ectopic when it lies caudal to the normal insertion of the ureter on the trigone. Most (70%) of ectopic ureters are associated with complete ureteral duplication. In addition, contralateral duplication occurs in 80% of cases. Ectopic ureters insert along the pathway of the developing mesonephric duct system. Hence, in boys, the orifice can lie in the bladder neck, prostate, or epididymis. In girls, the orifice usually inserts in the bladder neck, urethra, vagina, cervix, or uterus. Boys with ectopic ureters typically present with UTI or epididymo-orchitis, depending on whether the ectopic orifice is located in the genital ducts. Infant girls often present with UTI, whereas older girls present with incontinence because the ureteral orifice is distal to the bladder neck. Abdominal ultrasonography often shows a dilated ureter draining a dysplastic or normal upper pole kidney of a duplex system. If the ectopic ureter drains a single system, the kidney may be dysplastic. VCUG often demonstrates reflux in the ectopic system, and may reveal the "drooping lily" sign. In girls, ectopic ureters can be diagnosed by placing a cotton ball in the vagina, filling the bladder with dye, and examining the ball for dye. Finally, a MAG-3 study can estimate upper pole function before embarking on surgery.
Surgical management of ectopic ureters is determined by the presence or absence of ureteral duplication, as well as by the function of the subtended kidney. Most upper pole ectopic segments are nonfunctional and are treated by heminephrectomy and ureterectomy. Ectopic ureters draining single systems can be reimplanted in the bladder if they drain functional kidneys. Otherwise, nephroureterectomy is the procedure of choice.
Neuropathic Bladder
Neuropathic bladder can be caused by spinal cord-based disorders such as myelomeningocele and traumatic spinal cord injury. Secondary reflux and incomplete bladder emptying from poor bladder function increases the risk of pyelonephritis. In spina bifida cases with neuropathic bladder, there may be sacral bony defects or simply pigmentation, dimples, lipomas, or tufts of hair. Often, neuropathic bladder due to spinal cord injury or occult spinal dysraphisms is discovered after evaluation of orthopedic problems, difficulty walking, or urinary retention, incontinence or UTI. Management of neuropathic bladder includes neurosurgical intervention, anticholinergic medication, and intermittent catheterization. These patients require particularly careful long-term follow-up of urinary tract function to prevent renal failure from obstructive uropathy.
Posterior Urethral Valves
Posterior urethral valves (PUV) are the most frequent cause of congenital bladder outlet obstruction. PUV are obstructing, membranous folds within the lumen of the prostatic urethra, and only occur in boys (Figure 5). Antenatal ultrasound showing a distended, thick-walled fetal bladder and bilateral hydronephrosis is suggestive of PUV.
Figure 5. VCUG showing posterior urethral valves (arrow) in a boy.
Oligohydramnios often indicates poor fetal renal function and can lead to pulmonary hypoplasia and postnatal respiratory distress. Clinical presentation after birth includes respiratory difficulty, sepsis, renal failure, and a distended bladder. Less affected boys can present with recurrent UTI or urinary incontinence. One-third to one-half of boys with PUV also have VUR and/or renal dysplasia. Acutely ill neonates with PUV are treated initially by placing a small feeding tube into the bladder. Definitive early treatment consists of primary endoscopic valve ablation. Other options include cutaneous vesicostomy or bilateral ureterostomies followed by later endoscopic valve ablation. Persistent bladder dysfunction after valve ablation ("valve bladder syndrome") is an irreversible detrusor alteration caused by fetal bladder outlet obstruction. The potential for valve bladder syndrome and for renal dysplasia leading to renal failure in boys with PUV mandates careful lifelong follow-up.
Prune Belly Syndrome
Also known as Eagle-Barrett syndrome, this disorder features a deficiency or absence of abdominal wall musculature; dilation of the ureters, bladder, and urethra; and bilateral undescended testes. Renal dysplasia, pulmonary hypoplasia, poor bladder function, and susceptibility to UTI and respiratory tract infections are common. Patients are diagnosed in utero with urinary tract dilation on ultrasound, or noted at birth to have wrinkled, prune-like abdominal wall skin from lack of abdominal wall musculature (Figure 6).
Figure 6. Example of infant with prune belly syndrome
Evaluation with VCUG to check for VUR should be considered, but catheterization may result in introduction of bacteria into a stagnant urinary tract. DMSA scans are less invasive and can evaluate renal scarring. In mildly affected patients, lifelong antibiotic prophylaxis may be necessary. More severely affected patients who survive beyond the neonatal period often also require abdominal wall and urinary tract reconstruction along with orchidopexy. In select patients, clean intermittent catheterization may be helpful.
Urachal Remnants
The urachus is the remnant of the allantoic duct, which extends from the anterior bladder wall to the umbilicus. Typically, the urachus obliterates into a fibrous band, but on occasion some or all of this structure persists. A patent urachus may result from bladder outlet obstruction, but more commonly is not associated with other anomalies. The classic presentation is a neonate with a constantly wet umbilicus that leaks during crying or straining. Partially involuted urachal remnants can present later in childhood with infection or growth from accumulation of desquamated tissue. Symptoms include pain, fever, umbilical drainage, periumbilical mass, and UTI. Abdominal ultrasound and VCUG typically reveal urachal remnants, and contrast fistulography may also delineate these structures. Surgical resection is the treatment of choice.
Urinary Stones
In the U.S., urinary calculi occur more often in children from metabolic disorders, whereas in Europe, they tend to occur more frequently in children with UTI. Known metabolic abnormalities that predispose to stone formation include hypercalciuria, hyperoxaluria, hypocitraturia, hyperuricosuria, cystinuria, and low urine volume. Symptoms can include fever, dysuria, frequency, urgency, flank pain, hematuria, and UTI, although flank pain often is not seen in children under the age of 5. Approximately 78% of pediatric stones are located in the kidney. The most common stone types, in order of frequency, are calcium oxalate, calcium phosphate, and struvite. Renal and bladder ultrasound can identify stones, although in larger children, distal ureteral stones may be difficult to see. A KUB can reveal most stones, although pure uric acid stones are radiolucent. CT scans show nearly all stones and also provide anatomic detail that may be useful for operative planning. Spontaneous passage of stones occurs in up to half of children within 2 weeks of diagnosis. Otherwise, obstructing calculi can be treated with shock wave lithotripsy, percutaneous nephrolithotomy, or cystoscopy and ureteroscopy for bladder and ureteral stones, respectively. Long-term prevention of stones depends on the exact metabolic abnormality but often includes increasing water intake and decreasing salt intake.
Sexual Abuse
An estimated 1 in 4 girls and 1 in 10 boys will suffer sexual abuse before adulthood, and there are no predictive socioeconomic factors. Sexual abuse causing UTI should be considered in children with genital, perineal or anal bruising, abrasions, or lacerations. Abused children may also present with secondary incontinence (i.e., wet after at least 6 months of continence), low self-esteem, and a pathologic fear of examination. Suspected cases of sexual abuse must be reported to child protection services.
Dysfunctional Voiding Syndrome
Dysfunctional voiding syndrome refers to dysfunction of the lower urinary tract in the absence of any apparent organic cause. In broad terms, dysfunctional voiding is lack of coordination between bladder muscle (detrusor) function and external sphincter activity. Two major categories of children with dysfunctional voiding are those with "lazy", high capacity bladders with little sensation and contractile activity, and those with overactive bladders that lead to frequency and urgency. Dysfunctional voiding in children with overactive bladders is thought to be due to poor cortical control over inhibition of reflex bladder contractions. Certainly, behavior is crucial to the pathophysiology of most types of dysfunctional voiding. Dysfunctional voiding can lead to secondary VUR, and may be exacerbated by chronic constipation because of alterations in pelvic floor activity caused by impacted stool. These factors are thought to contribute to bacteriuria and UTI. Diagnostic studies that are helpful in children with dysfunctional voiding include renal sonography, which can detect hydronephrosis in severe cases, VCUG, which can reveal VUR, a KUB, which can show impacted stool, and urodynamics. Treatment of dysfunctional voiding consists of behavioral modification (i.e., timed voids), bowel regimens, anticholinergic medications, and short-term prophylactic antibiotics.
1. UTIs affect many children and have a significant healthcare impact.
2. Bacterial UTIs are associated with structural abnormalities of the urinary tract and also with acquired causes such as dysfunctional voiding, urinary stones and sexual abuse.
3. Fungal UTIs have associated risk factors that include immunosuppression, underlying structural abnormalities of the urinary tract and invasive lines.
4. Bacterial pyelonephritis carries the risk of renal scarring and subsequent renal insufficiency and hypertension.
5. The most common radiologic studies for children with UTI are renal and bladder ultrasound and VCUG.
6. Antibiotic treatment and prophylaxis are effective in treating and preventing UTI, but inappropriate antibiotic use has led to an increase in bacterial resistance.
7. UTI may be a sentinel event signaling the existence of an underlying congenital urinary tract abnormality, and the differential diagnosis must include this possibility.
ASA 124th Meeting New Orleans 1992 October
1aPA6. On the propagation of plane waves in dissipative anisotropic media.
Jose M. Carcione
Osservatorio Geofisico Sperimentale, P.O. Box 2011 Opicina, 34016 Trieste, Italy
Hamburg Univ., Germany
Fabio Cavallini
Osservatorio Geofisico Sperimentale, Trieste, Italy
A theory for propagation of time-harmonic fields in dissipative anisotropic media is not a simple extension of the elastic theory. Firstly, one has to decide for an appropriate constitutive equation that reduces to Hooke's law in the elastic limit. In this work, one relaxation function is assigned to the mean stress and three relaxation functions are assigned to the deviatoric stresses in order to model the quality factors along preferred directions. Secondly, in dissipative media there are two additional variables compared to elastic media: the magnitude of the attenuation vector and its angle with respect to the wave-number vector. When these vectors are colinear (homogeneous waves), phase velocity, slowness, and attenuation surfaces are simply derived from the complex velocity, although even in this case many of the elastic properties are lost. The wave fronts, defined by the energy velocities, are obtained from the energy balance equation. The attenuation factors are directly derived from the complex velocities, but the quality factors require the calculation of the potential and loss energy densities, yet resulting in a simple function of the complex velocities. [Work supported by EEC.] | <urn:uuid:f8830aba-f9b1-4c5e-ab19-945fb2bb62ae> | 2 | 1.640625 | 0.04343 | en | 0.893718 | http://www.auditory.org/asamtgs/asa92nwo/1aPA/1aPA6.html |
Friday, September 17, 2010
HDCP Master Key Is Real, But It Won't Do You Much Good [Security]
Intel confirmed that the HDCP "master key" posted anonymously last week is indeed real. But while it's always fun to see restrictive security measures get picked apart, this particular crack probably won't do you a whole lot of good.
CNET talked to all types of security folk to get the scoop on the implications of the leaked key, and while Cryptography Research president Paul Kocher says it'll let you "play god for this protocol,"—designed to protect content as it's beamed from set top boxes and Blu-ray players to HDTVs over HDMI—what the key really means is that a few years down the line there could be some hardware boxes that'll be able to create perfect bit for bit digital copies of HDCP-protected movies and broadcasts.
HDCP, short for High-bandwidth Digital Content Protection, is built directly into the chips in TVs and Blu-ray players, an Intel spokesperson explained, and to reap the benefits of the key you'd have to "implement them in silicon...a difficult and costly thing to do." Of course, Intel's still pushing ahead with the technology, which they license to all sorts of hardware manufacturers, so it's in their best interest to downplay the significance of the key making it into the wild.
But for those up to speed in the cryptology world, the appearance of the key is of little surprise. In 2001, researchers at Carnegie Mellon determined that only 39 HDCP-equipped devices would be required to reverse engineer the master key. So it's been something of an inevitability that someone would figure out the "master key"—the idea of a "master key" in any context is pretty enticing—but for now there will still be far easier ways for media pirates to do their pirating. [CNET] | <urn:uuid:b516a503-ae8b-4111-b210-fed3c6e4dbdd> | 2 | 2.15625 | 0.038109 | en | 0.953431 | http://www.augustinefou.com/2010/09/hdcp-master-key-is-real-but-it-wont-do.html |
What could it be?
My daughter, who is 17 months old, has had a fever since Monday; we have been to the doctor and to the ER! She has no other symptoms but the fever. It will go away a little after I give her meds, but as soon as the meds wear off the high temp comes back. The doctors do not know what it is. The ER ran a chest X-ray but it came back clear. I feel helpless because she cannot tell me what is wrong and if anything hurts. Can anyone give me some advice as to what might be wrong? The doctors don't know or they are just idiots!
Posted: 06/13/2013 by a BabyCenter Member
Mom Answers
My daughter did the same thing a few months back. It ended up being a U.T.I. Infections usually present themselves with a fever over 102. My daughter's fever was anywhere between 101-103.5.
posted 06/13/2013 by everett9
The Basketball Notebook
Monday, December 12, 2005
Basketball Notebook Stats Primer
New to The Basketball Notebook? Read this explanation of the nontraditional numbers and terminology often used on this blog.
In baseball, each team is allowed 27 outs to record as many runs as possible. Basketball's equivalent to the out is the possession. Each team has roughly the same number of possessions as its opponent, so the team that better converts its possessions into points wins the game.
Since the possession is the fundamental unit for analyzing basketball performance, it's important to know exactly what it means. A possession is simply the events that occur from the time one team gains control of the ball until the time at which the other team takes control. By definition, then, both teams in any given game will have the same number of possessions (give or take one or two, since the team that starts a half with the ball might also end with it).
Example - Connecticut's Marcus Williams steals the ball from Illinois's Dee Brown, then fires a pass to Rashad Anderson in the corner. He doesn't get the shot to go, but Josh Boone corrals the rebound and throws down a two-handed stuff over Brian Randle. UConn's possession begins when Williams steals the ball but does not end until after Boone makes the shot, because that is the point at which Illinois regains control of the ball.
There are several ways for a team's possession to end - it can make a field goal or free throw, it can miss a field goal or free throw that the other team rebounds, or it can just simply turn the ball over. These events are represented by the following formula -
Possessions = FGA - Oreb + 0.475*FTA + TO
Field goal attempts minus offensive rebounds represents all the shots a team either makes or allows its opponent to rebound, and turnovers are self-explanatory. The free throw term can be a little confusing. Not all free throw attempts can end a possession, since some are the first of a pair and some are part of three-point plays (where the possession is already counted by the made FG). Research by Ken Pomeroy indicates that about 47.5% of free throws end possessions.
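The formula can be sketched in code (the box-score totals below are hypothetical, just to show the arithmetic):

```python
def possessions(fga, oreb, fta, to):
    """Estimate team possessions from box-score totals.

    FGA - Oreb counts shots that ended a possession (made, or rebounded
    by the defense); 0.475 * FTA estimates the share of free throws that
    ended possessions; turnovers end possessions directly.
    """
    return fga - oreb + 0.475 * fta + to

# Hypothetical box score: 55 FGA, 10 offensive rebounds, 20 FTA, 12 TO.
print(possessions(55, 10, 20, 12))  # 66.5
```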
The unit commonly used to measure a team's offensive and defensive performance is points per game. This can be very misleading. What if a slow, walk-it-up team shoots 60% but only scores 60 points because of their deliberate style - would they be a poorer offensive team than one that runs and guns its way to 70 points on 40% shooting? The first team is making better use of its possessions, and its opponent will need a great offense to win, because it only has the same number of possessions with which to score.
To make it easier to compare teams with varying paces, performance is measured as points per possession (PPP). The formula is as simple as it sounds (except that we multiply by 100 to leave ourselves with friendlier numbers) -
(Points / Possessions) x 100
Thus a team's offensive efficiency is expressed as the points it scores per 100 possessions, and its defensive efficiency is the points it allows per 100 possessions. You might think that it would make sense to measure the spread between a team's offensive and defensive efficiencies. One step ahead of you.
Note - I use the terms PPP, offensive efficiency, and offensive rating interchangeably.
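Plugging the earlier slow-team/fast-team comparison into this formula shows why points per game misleads (the possession counts here are hypothetical):

```python
def points_per_100(points, possessions):
    """Offensive (or defensive) efficiency: points per 100 possessions."""
    return 100 * points / possessions

# Slow team: 60 points on 50 possessions. Fast team: 70 points on 75.
print(points_per_100(60, 50))  # 120.0 -- the better offense
print(points_per_100(70, 75))  # ~93.3 -- despite scoring more points
```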
Leading basketball analyst Dean Oliver breaks offensive performance into four categories, which he calls his Four Factors - shooting efficiency, turnover rate, offensive rebounding, and free throw conversion. They are listed in the order of their importance.
Shooting Efficiency
Shooting efficiency is measured by a stat called effective shooting percentage (eFG%) or adjusted field goal percentage (adjFG%). Traditional field goal percentage just measures the ratio of made field goals to field goals attempted. This doesn't take into account the added points from three point field goals. To illustrate the point - if J.J. Redick makes 4 out of 10 threes, he scores 12 points. If Shelden Williams makes 5 out of 10 two-point shots, he only scores 10 points, but has a higher FG%. Effective FG% eliminates that bias. The formula -
eFG% = (FG + 0.5 x 3FG) / FGA
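The Redick/Williams illustration works out like this in code (stat lines are the hypothetical ones from the text):

```python
def efg(fgm, fg3m, fga):
    """Effective FG%: credits the extra point from made three-pointers."""
    return (fgm + 0.5 * fg3m) / fga

print(efg(4, 4, 10))  # 0.6 -- 4-of-10, all threes (12 points)
print(efg(5, 0, 10))  # 0.5 -- 5-of-10, all twos (10 points)
```

Traditional FG% would rank the second line higher (50% vs. 40%) even though it produced fewer points.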
Turnover Rate
As you already know, a turnover is a loss of a possession, which lowers offensive efficiency. A team can't score when it gives the ball away before it can shoot. Turnover rate is a simple ratio -
TO% = Turnovers / Possessions
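A minimal sketch, with hypothetical totals:

```python
def turnover_rate(turnovers, possessions):
    """Fraction of possessions lost to turnovers."""
    return turnovers / possessions

# Hypothetical: 12 turnovers over 66.5 estimated possessions.
print(turnover_rate(12, 66.5))  # ~0.18
```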
Offensive Rebounding
In the event that a team does miss a shot, it can prolong its possession and give itself an additional chance to score by rebounding its own misses. Please don't perpetuate the myth that team rebounds per game, team offensive rebounds per game, etc, are a worthwhile stat. Use this instead -
Oreb% = Offensive Rebounds / (Offensive Rebounds + Opponent's Defensive Rebounds)
This way, you only measure how many rebounds a team grabs based on what's available. For example, if you use "team offensive rebounds per game," a team that shoots 30% is probably going to grab a lot of offensive rebounds, whether they're a good rebounding team or not. If you use Oreb%, you're looking at a ratio of how many rebounds a team grabbed compared to how many were available.
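A sketch of the "available rebounds" logic, with hypothetical totals:

```python
def oreb_pct(team_oreb, opp_dreb):
    """Share of a team's own misses that it rebounds: offensive boards
    over all rebounds available off those same missed shots."""
    return team_oreb / (team_oreb + opp_dreb)

# Hypothetical: the offense grabbed 12 of its misses, the opponent
# secured 24 defensive rebounds off the rest.
print(oreb_pct(12, 24))  # ~0.333
```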
Free Throw Conversion
The final component of offensive performance has two parts - the ability to get to the free throw line, and the ability to make free throws. However, we want to express this factor with just one number. If you're more concerned with measuring how often a team shoots free throws, use -
If you want to see how well a team shoots at the line in addition to how often they get there, use -

FT Rate = FTM / FGA
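In Dean Oliver's four-factors framework the free-throw factor is conventionally computed as FTA/FGA (frequency alone) or FTM/FGA (frequency and accuracy folded together); a sketch under that convention, with hypothetical totals:

```python
def ft_rate(fta, fga):
    """How often a team gets to the line, per field-goal attempt."""
    return fta / fga

def ft_conversion_rate(ftm, fga):
    """Folds accuracy in: made free throws per field-goal attempt."""
    return ftm / fga

# Hypothetical: 20 FTA, 15 made, on 55 FGA.
print(ft_rate(20, 55))             # ~0.364
print(ft_conversion_rate(15, 55))  # ~0.273
```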
Dean Oliver created a stat he calls "Offensive Rating" to measure an individual player's efficiency at producing points for the offense. The end formula is simple -
Offensive Rating = (Points Produced / Individual Possessions) x 100
However, its components are rather complicated. Points can be produced through field goals, free throws, assists, and offensive rebounds. Individual possessions are the sum of a player's scoring possessions (field goals, free throws, plus partial credit for assists), missed field goals and free throws that the defense rebounds, and turnovers. For details on the calculations, consult Dean's book.
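Oliver's full points-produced bookkeeping is beyond a short example, but a deliberately simplified sketch (my approximation, not his exact credit-splitting: it just applies the team possession formula to one player's box-score line, ignoring assists and team context) looks like:

```python
def simple_individual_rating(points, fga, oreb, fta, to):
    """Rough individual offensive rating: points per 100 of the
    possessions this player's own box-score line used up."""
    poss = fga - oreb + 0.475 * fta + to  # simplified possession count
    return 100 * points / poss

# Hypothetical line: 20 points on 15 FGA, 2 oreb, 6 FTA, 3 TO.
print(simple_individual_rating(20, 15, 2, 6, 3))  # ~106.1
```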
Much like Dean's Four Factors for team offensive production, I like to look at how well a player performs in each area of the offense - shooting, passing, rebounding, and turnovers.
One simple measure of shooting effectiveness is eFG%, which is calculated the same way as described above. I usually prefer to use a variation of John Hollinger's True Shot Percentage (TS%), because it takes free throws into account.
TS% = Points / [ 2 x (FGA + 0.475 x FTA) ]
The fewer the field goal and free throw attempts a player uses to score points, the higher his true shot percentage. Simple enough.
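A sketch with a hypothetical 25-point shooting night:

```python
def true_shooting(points, fga, fta):
    """True shooting percentage: points per shooting possession used,
    counting the estimated possession cost of free throws."""
    return points / (2 * (fga + 0.475 * fta))

# Hypothetical: 25 points on 15 FGA and 8 FTA.
print(true_shooting(25, 15, 8))  # ~0.665
```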
Individual rebound percentage (Reb%) is similar to the team Reb% formula above, except that you must account for playing time. Since all missed shots must be rebounded by the offense or defense (or at least they're credited to one of the teams), we can measure how effective a player is at rebounding by comparing his rebound total to the number of shots missed while he's on the court.
Reb% = Rebounds / [ (Team's rebounds + Opponent's Rebounds) x (Minutes / Team Minutes)]
You multiply the number of rebounds in the game times the percentage of the game that the player was on the court to arrive at an estimate of how many missed shots were available to a player to rebound. The percentage of those available rebounds that he actually grabs is his rebound percentage.
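A sketch, treating "Team Minutes" as the length of the game (40 minutes for a college game) and using hypothetical totals:

```python
def rebound_pct(rebounds, minutes, team_minutes, team_reb, opp_reb):
    """Share of available rebounds a player grabs while on the floor."""
    # Rebounds available to this player, prorated by his playing time.
    available = (team_reb + opp_reb) * (minutes / team_minutes)
    return rebounds / available

# Hypothetical: 10 boards in 32 of 40 minutes; 70 total rebounds
# (40 by his team, 30 by the opponent) in the game.
print(rebound_pct(10, 32, 40, 40, 30))  # ~0.179
```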
This is another simple one.
TO% = Turnovers / Individual Possessions
Individual possessions, as noted earlier, are the sum of scoring possessions, missed shots and free throws not rebounded by the player's teammates, and turnovers.
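Reusing the simplified possession estimate from the team formula (again an approximation of Oliver's fuller accounting, since the exact split of non-rebounded misses is not shown here):

```python
def individual_turnover_rate(to, fga, oreb, fta):
    """Turnovers per individual possession, with a simplified
    possession count built from the player's own box-score line."""
    poss = fga - oreb + 0.475 * fta + to
    return to / poss

# Hypothetical line: 3 TO on 15 FGA, 2 oreb, 6 FTA.
print(individual_turnover_rate(3, 15, 2, 6))  # ~0.159
```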
Want More?
I first learned most of this stuff by reading books written by Dean Oliver and John Hollinger, so click on their names if you're interested in further reading. | <urn:uuid:7af5fdea-4418-4b2a-a2cf-0d81cedbf172> | 3 | 2.546875 | 0.293398 | en | 0.938811 | http://www.basketballnotebook.blogspot.com/2005_12_01_archive.html |
[Q] Stochastic Resonance
Joseph Sirosh sirosh at cs.utexas.edu
Sat Oct 15 18:02:27 EST 1994
I have heard the keyword stochastic resonance mentioned often in connection with
networks in the brain. Harry Erwin referred to it in many of his posts. What
exactly is Stochastic Resonance? How is it relevant to brain function? How does
it allow for hyperacuity (mentioned in passing in one of Harry's posts..)?
Very Curious,
Joseph Sirosh email: sirosh at cs.utexas.edu
Dept of Computer Sciences WWW : http://www.cs.utexas.edu/~sirosh
UT Austin, Austin, TX-78712. phone: (512) 451-3623
How to adjust HID headlights
Most car accidents happen at night or at times when there is less light illuminating the roads. It is indeed difficult to maneuver a vehicle when you don't clearly see the road you are passing through. It can be compared to a 90-year-old man trying to read without his glasses on, or looking for a needle at the bottom of a river with murky water. Hence the importance of a vehicle's headlights. Headlights minimize the risk one takes when driving at night. They make us more aware of the hazard signs that we come across as we drive. They guide us and enable us to get to our destination safely.
But just like any other part of your vehicle, they also need regular maintenance. There are times when they get cloudy providing less light output, or they are not adjusted to an angle that will enable them to illuminate the road best. Therefore, someone who drives must ensure they work properly at all occasions.
Adjusting the headlights is quite a simple task. It does not take a mechanic to get it done. All you need are a few tools: a Phillips screwdriver, or any that fits the adjusting screws, and some masking tape.
The best time to do it, of course, is at night. And then, follow these simple procedures:
1. Park your vehicle on level ground with the headlights near a garage door or wall.
2. Mark the horizontal centerlines of the lights on the wall using a masking tape. Mark the vertical centerline for each light as well. The marking will tell you where exactly the headlights should be centered.
3. Move the car back about 10 to 25 feet away from the garage door or the wall.
4. Find the adjusting screws of the headlights. It is better to find these screws before turning on the lights. It will allow you to touch the headlights before they warm up. There are horizontal and vertical screws which have a small spring behind them. Some cars come equipped with a small level attached to the top of the headlight under the hood. It helps you get an accurate adjustment.
5. Turn on the headlights on the low beam setting. Your high beams should end up at the right level as well when you adjust the low beams.
6. Use the markings you made on the wall or garage door, check where the light shines and see if there is any uneven light beam. Check if the beams match.
7. Make the necessary adjustments with the lights still on and while watching the lights beams. Turning the top adjusting screws in a clockwise direction will raise the beam while counterclockwise turn will lower it. Turning the side adjuster screws will adjust the lights to the left or right. Continue the adjustments until the light beams are even. It is recommended that the lights are tilted slightly downward so they won’t blind approaching motorist.
Cars, however, can differ. So, it is best to check first your vehicle’s manual, which would also indicate how often the headlights should be adjusted. It is usually recommended that headlights be adjusted annually or as often as necessary, whenever they are out of alignment.
A few simple tools and a few easy procedures, and presto: you are ready to hit the road.
Boost C++ Libraries
Concept Checking
Each of the range concepts has a corresponding concept checking class in the file <boost/range/concepts.hpp>. These classes may be used in conjunction with the Boost Concept Check library to ensure that the type of a template parameter is compatible with a range concept. If not, a meaningful compile time error is generated. Checks are provided for the range concepts related to iterator traversal categories. For example, the following line checks that the type T models the Forward Range concept.
BOOST_CONCEPT_ASSERT(( ForwardRangeConcept<T> ));
An additional concept check is required for the value access property of the range based on the range's iterator type. For example to check for a ForwardReadableRange, the following code is required.
BOOST_CONCEPT_ASSERT(( ForwardRangeConcept<T> ));
BOOST_CONCEPT_ASSERT(( ReadableIteratorConcept<typename range_iterator<T>::type> ));
The following range concept checking classes are provided:

SinglePassRangeConcept
ForwardRangeConcept
BidirectionalRangeConcept
RandomAccessRangeConcept
See also
Range Terminology and style guidelines
Iterator concepts
Boost Concept Check library | <urn:uuid:f606a571-4a89-46db-9b4b-71258fc06eb2> | 2 | 2.28125 | 0.116124 | en | 0.720365 | http://www.boost.org/doc/libs/1_48_0/libs/range/doc/html/range/concepts/concept_checking.html |
deliberative democracy, school of thought in political theory that claims that political decisions should be the product of fair and reasonable discussion and debate among citizens.
In deliberation, citizens exchange arguments and consider different claims that are designed to secure the public good. Through this conversation, citizens can come to an agreement about what procedure, action, or policy will best produce the public good. Deliberation is a necessary precondition for the legitimacy of democratic political decisions. Rather than thinking of political decisions as the aggregate of citizens’ preferences, deliberative democracy claims that citizens should arrive at political decisions through reason and the collection of competing arguments and viewpoints. In other words, citizens’ preferences should be shaped by deliberation in advance of decision making, rather than by self-interest. With respect to individual and collective citizen decision making, deliberative democracy shifts the emphasis from the outcome of the decision to the quality of the process.
Deliberation in democratic processes generates outcomes that secure the public or common good through reason rather than through political power. Deliberative democracy is based not on a competition between conflicting interests but on an exchange of information and justifications supporting varying perspectives on the public good. Ultimately, citizens should be swayed by the force of the better argument rather than by private concerns, biases, or views that are not publicly justifiable to their fellow deliberators.
Early influences
Two of the early influences on deliberative democratic theory are the philosophers John Rawls and Jürgen Habermas. Rawls advocated the use of reason in securing the framework for a just political society. For Rawls, reason curtails self-interest to justify the structure of a political society that is fair for all participants in that society and secures equal rights for all members of that society. These conditions secure the possibility for fair citizen participation in the future. Habermas claimed that fair procedures and clear communication can produce legitimate and consensual decisions by citizens. These fair procedures governing the deliberative process are what legitimates the outcomes.
Features of deliberation
Deliberative theorists tend to argue that publicity is a necessary feature of legitimate democratic processes. First, issues within a democracy should be public and should be publicly debated. Second, processes within democratic institutions must be public and subject to public scrutiny. Finally, in addition to being provided with information, citizens need to ensure the use of a public form of reason to ground political decisions, rather than rely on transcendent sources of authority available only to a segment of the citizenry, such as revealed religion. The public nature of the reason used to ground political decisions generates outcomes that are fair and reasonable but subject to revision if warranted by new information or further deliberation.
Some deliberative theorists claim that the deliberative process of exchanging arguments for contrasting viewpoints can and should produce a consensus. Others think that disagreement will remain after the deliberative process is completed but that deliberation can produce legitimate outcomes without consensus. Even when the exchange of reason, arguments, and viewpoints does not seem to produce a clear outcome, many deliberative theorists suggest that the dissent produced, and the continuing debate, enhances the democratic process.
Because the deliberative process requires that citizens understand, formulate, and exchange arguments for their views, norms of clear communication and rules of argumentation are important to formulate. Citizens must be able to present their claims in understandable and meaningful ways to their fellow deliberators. These claims must also be supported by argumentation and reason that makes these views publicly justifiable to differently situated deliberators.
Most theories of deliberative democracy hold that the maximum inclusion of citizens and viewpoints generates the most legitimate and reasonable political outcomes. In addition to improving the level of discussion and accounting for the most arguments, more-inclusive deliberative processes are fairer because more people have their views considered. Whether or not a citizen’s view is present in the outcome, it has at least been figured into the debate by fellow citizen deliberators.
Challenges to deliberative democratic theory
Many theorists consider the following possible problems with theories of deliberative democracy. If only certain modes of expression, forms of argument, and cultural styles are publicly acceptable, then the voices of certain citizens will be excluded. This exclusion will diminish the quality and legitimacy of the outcomes of deliberative processes. Further, deliberation assumes the capacity of citizens to be reasonable, cooperate, unify, and shape their views based on rational debate and the views of others. Some argue that this may be more than human beings are capable of, either because of human nature or because of already existing social inequalities and biases. Social conditions, such as already existing structural inequalities, pluralism, social complexity, the increasing scope of political concerns, and the impracticality of affected citizens having forums in which to deliberate are also reasons why some are skeptical of the viability of a deliberative form of democracy.
Deliberative democratic theory brings ethical concerns into the realm of democratic decision making. The ultimate aim of deliberative democratic practices is increased citizen participation, better outcomes, and a more authentically democratic society.
George Boole
George Boole, engraving. Courtesy of the trustees of the British Museum; photograph, J.R. Freeman & Co. Ltd.
George Boole, (born November 2, 1815, Lincoln, Lincolnshire, England—died December 8, 1864, Ballintemple, County Cork, Ireland), English mathematician who helped establish modern symbolic logic and whose algebra of logic, now called Boolean algebra, is basic to the design of digital computer circuits.
Boole was given his first lessons in mathematics by his father, a tradesman, who also taught him to make optical instruments. Aside from his father’s help and a few years at local schools, however, Boole was self-taught in mathematics. When his father’s business declined, George had to work to support the family. From the age of 16 he taught in village schools in the West Riding of Yorkshire, and he opened his own school in Lincoln when he was 20. During scant leisure time he read mathematics journals in the Lincoln’s Mechanics Institute. There he also read Isaac Newton’s Principia, Pierre-Simon Laplace’s Traité de mécanique céleste, and Joseph-Louis Lagrange’s Mécanique analytique and began to solve advanced problems in algebra.
Boole submitted a stream of original papers to the new Cambridge Mathematical Journal, beginning in 1839 with his “Researches on the Theory of Analytical Transformations.” These papers were on differential equations and the algebraic problem of linear transformation, emphasizing the concept of invariance. In 1844, in an important paper in the Philosophical Transactions of the Royal Society for which he was awarded the Royal Society’s first gold medal for mathematics, he discussed how methods of algebra and calculus might be combined. Boole soon saw that his algebra could also be applied in logic.
Developing novel ideas on logical method and confident in the symbolic reasoning he had derived from his mathematical investigations, he published in 1847 a pamphlet, “Mathematical Analysis of Logic,” in which he argued persuasively that logic should be allied with mathematics, not philosophy. He won the admiration of the English logician Augustus De Morgan, who published Formal Logic the same year. On the basis of his publications, Boole in 1849 was appointed professor of mathematics at Queen’s College, County Cork, even though he had no university degree. In 1854 he published An Investigation into the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, which he regarded as a mature statement of his ideas. The next year he married Mary Everest, niece of Sir George Everest, for whom the mountain is named. The Booles had five daughters.
One of the first Englishmen to write on logic, Boole pointed out the analogy between algebraic symbols and those that can represent logical forms and syllogisms, showing how the symbols of quantity can be separated from those of operation. With Boole in 1847 and 1854 began the algebra of logic, or what is now called Boolean algebra. Boole’s original and remarkable general symbolic method of logical inference, fully stated in Laws of Thought (1854), enables one, given any propositions involving any number of terms, to draw conclusions that are logically contained in the premises. He also attempted a general method in probabilities, which would make it possible from the given probabilities of any system of events to determine the consequent probability of any other event logically connected with the given events.
In 1857 Boole was elected a fellow of the Royal Society. The influential Treatise on Differential Equations appeared in 1859 and was followed the next year by its sequel, Treatise on the Calculus of Finite Differences. Used as textbooks for many years, these works embody an elaboration of Boole’s more important discoveries. Boole’s abstruse reasoning has led to applications of which he never dreamed: for example, telephone switching and electronic computers use binary digits and logical elements that rely on Boolean logic for their design and operation. | <urn:uuid:78f162fa-2de3-4202-9d75-10b88eb8c2dc> | 3 | 3.171875 | 0.026998 | en | 0.965071 | http://www.britannica.com/print/topic/73612 |
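Boole's general method of inference can be restated in modern terms: an argument is valid exactly when no assignment of truth values makes every premise true while the conclusion is false. The sketch below is purely illustrative and is not Boole's own notation — Python's `and`/`or`/`not` operators are descendants of his algebra — but it checks a classical syllogism by exhaustive truth-table search:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """An argument is valid iff every truth assignment satisfying
    all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a countermodel
    return True

# Syllogism: (A implies B) and (B implies C) together entail (A implies C).
premises = [lambda a, b, c: (not a) or b,   # A -> B
            lambda a, b, c: (not b) or c]   # B -> C
conclusion = lambda a, b, c: (not a) or c   # A -> C
print(valid(premises, conclusion, 3))       # True: the inference holds

# By contrast, (A implies B) does not entail its converse (B implies A):
print(valid([lambda a, b: (not a) or b],
            lambda a, b: (not b) or a, 2))  # False
```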
Back in basic nutrition class, we were introduced to the concept of “first limiting nutrient.” Take a simplified example: if some calves were eating enough protein, minerals, and vitamins to support 3 pounds of daily gain, but only enough energy to support 2 ½, they would gain . . . 2 ½ lb a day. In this case, energy is the first limiting nutrient. My professors illustrated this with the image of a barrel with one stave shorter than the others – it could only be filled to the top of the short stave, limiting the barrel’s effective volume. But, as I think about it, barrels aren’t something most of us routinely encounter. A more up-to-date example might be having a cell phone with 4G capabilities, paying for a plan that supports 4G access, but being somewhere with only 3G service available. So the local service becomes the “first limiting” factor...and someone who uses their phone for more demanding applications than I do doesn’t get the speed they were hoping for.
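In code, the "first limiting" idea is simply a minimum taken over the performance each input can support. A minimal sketch, using hypothetical figures that match the calf example above:

```python
def supported_gain(nutrient_support):
    """Overall performance is capped by the first limiting nutrient:
    the input that supports the least gain."""
    limiter = min(nutrient_support, key=nutrient_support.get)
    return nutrient_support[limiter], limiter

# Hypothetical lb/day of gain each nutrient level could support
support = {"protein": 3.0, "minerals": 3.0, "vitamins": 3.0, "energy": 2.5}
gain, limiter = supported_gain(support)
print(f"{gain} lb/day, limited by {limiter}")  # 2.5 lb/day, limited by energy
```

Raising any of the non-limiting inputs changes nothing until the shortest "stave" — here, energy — is addressed first.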
A conversation at a conference a few weeks ago got me thinking about looking at this concept in a broader context. I was talking with friends who, like me, are professionally focused on helping cattle producers make their operations as profitable and sustainable as possible. We each work for companies that can provide products or programs that address specific needs or challenges, and we are all confident that we represent valid opportunities to enhance our customers’ businesses. But we’ve all found ourselves in situations where we didn’t – or where we knew we couldn’t – provide the kind of ‘bang for the buck’ being looked for. That’s because there was a “first limiting” management practice on the farm or ranch that needed to be addressed first.
I guess this follows the same line of thinking as ‘holistic management,’ but without the fancy name. What it boils down to is recognizing that not only are all areas of cattle production important, they are so interdependent that we can’t manage (or ignore) one without considering the relationships to all the others.
Herd Health
When cattle are sick, it probably doesn’t matter which efficiency enhancer we add to the diet, or how great their sire’s EPDs were; they still aren’t going to perform at a high level. Infections can place multiple roadblocks in the way of growth and reproduction:
• Energy and other resources are re-directed towards fighting disease organisms and mounting a multi-pronged immune response;
• Some diseases lead to specific losses such as abortions;
• Sick animals often reduce their feed intake;
• Absorption of nutrients from the feed that is eaten may be impaired.
It’s easy to see that an inadequate preventive health program can stand in the way of expected performance. A veterinarian who is familiar with the operation and local conditions is the best resource for developing a vaccination and biosecurity protocol that helps ensure that overall productivity isn’t limited by disease challenges.
Like infectious diseases, both internal and external parasites can sap valuable nutrients away from productive purposes. Heavily infested cattle eat less, absorb fewer nutrients, must fuel stress-related and immune responses, and have to repair damaged tissues and replace lost blood. If all that is going on, it is unlikely that adoption of any unrelated management practice is going to completely overcome the limits placed by the presence of high numbers of flies, worms, or other damaging pests.
This summer’s drought has brought the potential impact of environmental conditions home to many of the nation’s cattle producers. Heat, cold, mud, wind, and limited water quality and quantity can all add stress, increase maintenance requirements, and discourage feed consumption. Man-made components of the animals’ surroundings can both relieve or compound these problems. While much harder to control than diseases and parasites, environment can certainly become the factor that places the ceiling on potential performance. When that is the case, focusing on options like mounds, shade, misters, improved access to water, or windbreaks will be the best investment the operation can make.
When everything comes together, and cattle are healthy, parasite-free, and receiving a balanced diet in a low-stress environment, they can still only perform to their genetic potential. If the goal is for even faster gains, heavier weights, or more efficiency (and the resources are there to support this level of production), the right cattle need to be in place to accomplish that. On the flip side, large rapidly-growing animals need more inputs than their more moderate counterparts, and unless those can be provided economically, that genetic base is a poor fit. Unless the cattle fit the environment and the performance goals of the operation, the herd's genetic makeup is going to be the major limitation towards desired progress.
I began this discussion with a review of the ‘first limiting nutrient’ within a nutritional program. But the overall diet being offered to cattle can also be the definitive constraint on potential herd improvements. Cattle receiving inadequate nutrition can exhibit reduced response to vaccines, be unable to mount adequate immune responses, will not exhibit their full genetic potential for growth or reproduction, are unlikely to fully benefit from products like ionophores, and cannot effectively combat environmental stress. A balanced diet, potentially enhanced with any of a range of effective feed additives, is the foundation of a successful and profitable cattle enterprise.
All these areas are critically important in raising cattle. And adopting tools and practices that enhance any of them can be a good investment. But to get the best return on that investment – and to see the full expected response to that change – we need to be sure there isn’t another “first limiting” management practice that needs to be addressed first. | <urn:uuid:1a47d7cb-36cf-45b5-9312-8da3bf438018> | 2 | 1.78125 | 0.031996 | en | 0.936948 | http://www.cattlenetwork.com/cattle-news/Whats-holding-you-back-169683306.html |
LightSquared, GPS industry square off on Capitol Hill
Fri, 06/24/2011 - 8:05am
Maisie Ramsay, Wireless Week
LightSquared defended its plans to build an LTE network in spectrum near GPS bands at a congressional hearing this morning, where some expressed skepticism about whether the company should be allowed to launch its mobile broadband service in the L-band.
Top officials at the Department of Transportation, Defense Department and Coast Guard, as well as executives from the Radio Technical Commission for Aeronautics (RTCA), Garmin, the Aircraft Owners and Pilots Association (AOPA) and the Air Transport Association (ATA) testified that LightSquared's original network plan would have catastrophic effects on GPS systems, and said it remained unclear whether the company's revised plan would resolve the issue.
Margaret Jenny, president of the RTCA, said testing showed LightSquared's network would be compatible with GPS systems if it only operated in the 5 MHz of Inmarsat's spectrum sitting farthest away from the GPS band. However, LightSquared's revised plan calls for the company to use the lower 10 MHz of Inmarsat's spectrum, which could knock out some high-precision GPS systems, such as those used to land airplanes and conduct rescue missions.
"The lower 10 MHz need more study," Jenny said.
Some speaking at the hearing called for LightSquared to move its network to spectrum outside the L-band or give up their plans altogether.
"Like the FDA, I think they [the FCC] need to issue a recall," AOPA President Craig Fuller said, comparing LightSquared's waiver to operate a mobile broadband service to a harmful drug that erroneously passed the FDA's screening process. "This is simply a toxic drug. This will not work in the system we have today."
Phil Straub, vice president of aviation engineering at Garmin International, echoed Fuller's remarks, calling for the FCC to rescind LightSquared's waiver and move the company's service out of the L-band.
"Please do everyone a service and put an end to this dysfunctional exercise," Straub said.
Some of the lawmakers questioning witnesses at the hearing also expressed doubts as to whether LightSquared should be allowed to move forward.
Missouri Republican Congressman Sam Graves said he was "terribly concerned" about the effect LightSquared's network could have on GPS systems used in the aviation and agriculture industries.
"I'll be honest with you, I'm not comfortable with it whatsoever, I'm not supportive whatsoever," Graves said.
LightSquared spokesman Jeff Carlisle defended the company's plans, pointing out that the GPS industry had only recently voiced concerns about interference despite the fact that plans to roll out a mobile broadband service in the L-band had been years in the making.
Carlisle also stated that LightSquared was confident its plan to use Inmarsat's spectrum would resolve the interference issue for 99 percent of the estimated 500 million GPS receivers currently being used in the United States.
"Our operation in the lower part of the band does not cause interference for the vast majority of GPS receivers," Carlisle said, but admitted that "further work needs to be done" to determine whether GPS used in aviation would still be affected under the company's revised plan.
Carlisle and witnesses speaking on behalf of the GPS industry clashed over whether filters could be used to resolve the interference issue. Carlisle stated that the filters were in development and would cost just pennies to install on some devices, while Straub and Fuller questioned the existence of the technology and the practicality of outfitting millions of GPS receivers with filters.
Republican Congressman Tom Petri said that the House Aviation Subcommittee, which he chairs, may ask the FCC for time to "allow full, comprehensive study of the plans" and called for independent testing of LightSquared's revised network build.
Roy Kienitz, under secretary for policy at the Transportation Department, said that more testing would be needed for the agency to assess whether LightSquared's revised network would affect GPS if it only operated in the lower 10 MHz of Inmarsat's spectrum.
"The Department of Transportation would like to work towards a 'win-win' – if one exists - that allows for increased broadband access, without disrupting existing and planned GPS-based services, such as NextGen," he said. "Any alternative must be robustly tested, as was the original plan."
Understanding Employee Benefits
When you graduate from Champlain, you'll be applying for and, hopefully, getting a job. When you get that job, will you know how to evaluate the offer you're getting and make the most of your benefits package? Why leave money on the table by not knowing what an employer is really offering you?
This workshop will help you understand employee benefits and how to get the most out of them. You will leave with an understanding of insurance and retirement options, the real dollar value of benefits and how to know what the best compensation packages really are.
It's not as simple as you might think. The more you know, the more you can advocate for yourself and get a package that really works for you!
Taught by: John Pelletier, Center for Financial Literacy at Champlain College | <urn:uuid:088a39b9-a9fe-433b-923b-535c3e373946> | 2 | 1.671875 | 0.044059 | en | 0.967533 | http://www.champlain.edu/centers-of-excellence/center-for-financial-literacy/cfl-programs/financial-education-for-champlain-students/financial-sophistication/understanding-employee-benefits |
Brain Stem Gliomas in Childhood
Paul Graham FIsher, M.D., M.H.S. and Michelle Monje, M.D., Ph.D
Brain stem tumors are perhaps the most dreaded cancers in pediatric oncology, owing to their historically poor prognosis, yet they remain an area of intense research. Brain stem tumors account for about 10 to 15% of childhood brain tumors. Peak incidence for these tumors occurs around age 6 to 9 years. The term brain stem glioma is often used interchangeably with brain stem tumor. More precisely, glioma encompasses tumor pathology types such as ganglioglioma, pilocytic astrocytoma, diffuse astrocytoma, anaplastic astrocytoma, and glioblastoma multiforme.
Rarely, other tumor pathologies such as atypical teratoid/rhabdoid tumor (ATRT), primitive neuroectodermal tumor (PNET)/embryonal tumor, and hemangioblastoma occur at the brain stem. These entities are quite different from brain stem gliomas, and the following comments do not apply.
Classification: Brain stem gliomas have been grouped in the past according to their pathology and location within the brain stem. Terms found in the medical literature include diffuse intrinsic gliomas, midbrain tumors, tectal gliomas, pencil gliomas, dorsal exophytic brain stem tumors, cervicomedullary tumors, focal gliomas, and cystic tumors. A simpler way to classify these tumors is by two categories: diffuse intrinsic pontine glioma (DIPG) and focal brain stem glioma.
Symptoms: Children with DIPG present with ataxia (clumsiness or wobbliness), weakness of a leg and/or arm, double vision, and sometimes headaches, vomiting, tilting of the head, or facial weakness. Double vision (diplopia) is the most common presenting symptom for these tumors. Symptoms are usually present for 6 months or less at time of diagnosis. Patients with focal brain stem gliomas may display some of the same symptoms, although not the usual combination of ataxia, weakness, and double vision. Duration of symptoms is often greater than 6 months before the focal brain stem tumor is diagnosed.
Diagnosis: Throughout the United States, brain magnetic resonance imaging (MRI), with and without gadolinium contrast, remains the "gold standard" for diagnosis of brain stem gliomas. Biopsy is seldom performed outside specialized biomedical research protocols for DIPG, unless the diagnosis of this tumor is in doubt. Biopsy may be indicated for brain stem tumors that are focal or atypical, especially when the tumor is progressive or when surgical excision may be possible.
Diffuse intrinsic pontine gliomas (DIPG) insinuate diffusely throughout the normal structures of the pons (the middle portion of the brain stem), sometimes spreading to the midbrain (the upper portion of the brain stem) or the medulla (the bottom portion of the brain stem). The term diffuse intrinsic glioma is synonymous. By pathology, these tumors are most often a diffuse (sometimes referred to as fibrillary) astrocytoma (World Health Organization [WHO] grade II) or its higher-grade counterparts, anaplastic astrocytoma (WHO III) and glioblastoma multiforme (WHO IV). Very rarely these tumors start in the medulla or midbrain.
Focal brain stem gliomas--perhaps 20% or more of brain stem gliomas--include tumors that are more circumscribed, focal, or contained at the brain stem. These tumors may have cysts or grow out from the brain stem (i.e., exophytic). These tumors more often arise in the midbrain or medulla, rather than the pons. Pathology for these tumors is frequently pilocytic astrocytoma (WHO I) or ganglioglioma (WHO I), although rarely diffuse astrocytoma (WHO II).
Since brain stem gliomas are relatively uncommon and require complex management, children with such tumors deserve evaluation in a comprehensive cancer center where the coordinated services of dedicated pediatric neurosurgeons, child neurologists, pediatric oncologists, radiation oncologists, neuropathologists, and neuroradiologists are available. In particular, for DIPG, because of its rarity and poor prognosis, children and their families should be encouraged to participate in clinical trials attempting to improve survival with innovative therapy.
Neurosurgery Surgery to attempt tumor removal is usually not possible or advisable for DIPG. By their very nature, these tumors invade diffusely throughout the brain stem, growing between normal nerve cells. Aggressive surgery would cause severe damage to neural structures vital for arm and leg movement, eye movement, swallowing, breathing, and even consciousness.
Surgery with less than total removal can be performed for many focal brain stem gliomas. Such surgery often results in quality long-term survival, without administering chemotherapy or radiotherapy immediately after surgery, even when a child has residual tumor. Surgery is particularly useful for tumors that grow out (exophytic) from the brain stem.
Focal brain stem tumors that arise at the top back of the midbrain (tectal gliomas) should be managed conservatively, without surgical removal. Nevertheless, shunt placement or ventriculostomy for hydrocephalus (see below) is frequently necessary. These tumors have been described to be stable for many years or decades without any intervention other than shunting.
Radiotherapy: Conventional radiotherapy, limited to the involved area of tumor, is the mainstay of treatment for DIPG. A total radiation dosage ranging from 5400 to 6000 cGy, administered in daily fractions of 150 to 200 cGy over 6 weeks, is standard. Hyperfractionated (twice-daily) radiotherapy was used previously to deliver higher irradiation dosages, but such did not lead to improved survival. Radiosurgery (e.g., gamma knife, Cyberknife) has no role in the treatment of DIPG.
Chemotherapy and other drug therapies: The role of chemotherapy in DIPG remains unclear. Studies to date with chemotherapy have shown little improvement in survival, although efforts (see below) through the Children's Oncology Group (COG), Pediatric Brain Tumor Consortium (PBTC), and others are underway to explore further the use of chemotherapy and other drugs. Drugs utilized to increase the effect of radiotherapy (radiosensitizers) have thus far shown no added benefit, but promising new agents are under investigation. Immunotherapy with beta-interferon and other drugs to modify biologic response has shown disappointing results. Intensive or high-dose chemotherapy with autologous bone marrow transplant or peripheral blood stem cell rescue has not demonstrated any effectiveness in brain stem gliomas and is not recommended. Future clinical trials may incorporate medicines to interfere with cellular pathways (signal transfer inhibitors) or other approaches that alter the tumor or its environment. For more information and a listing of the most up-to-date trials, the reader is encouraged to check the websites of the National Institutes of Health clinical trials registry, the National Childhood Cancer Foundation/COG, and the PBTC.
In focal brain stem gliomas, chemotherapy, such as carboplatin/vincristine, procarbazine/CCNU/vincristine, or temozolomide, may be useful in children whose tumors are progressive and not surgically accessible. In children younger than age 3 years, chemotherapy may be preferable to radiotherapy because of the effects of irradiation on the developing brain.
Recurrent or Progressive Brain Stem Gliomas: Regrettably, DIPG has a high rate of recurrence or progression. At relapse, a variety of Phase I and Phase II drug trials are available through the national research consortiums COG and PBTC, as well as through individual pediatric institutions. Oral etoposide, temozolomide, and cyclophosphamide are drug options sometimes utilized outside a study.
Prognosis: DIPG often follows an inexorable course of progression, despite therapy. A large majority of children die within a year of diagnosis. Focal brain stem glioma, however, can carry an exceptional prognosis, with long-term survivals frequently reported.
Other Management Issues: Shunts: Less than half of children with brain stem tumors will develop obstructive hydrocephalus, requiring a shunt or ventriculostomy, at some time during the course of their illness. Shunts are simple mechanical tubing devices that divert cerebrospinal fluid trapped in the brain's ventricles above the tumor to another location in the body, typically the abdomen (peritoneum), as in a ventriculoperitoneal shunt. A ventriculostomy is the surgical creation of an internal channel, often from the third ventricle to a lower portion of the brain, to allow cerebrospinal fluid to drain beyond the tumor.
Steroids: Dexamethasone (brand name Decadron) is a steroid drug frequently administered to brain stem tumor patients for the swelling and "tightness" of their tumor at the base of their skull. Dexamethasone must be used sparingly! Dexamethasone should never be prescribed prophylactically or "just in case." That is, this steroid is an extremely effective medicine for symptomatic swelling associated with treatment of a brain stem glioma, particularly with radiotherapy. However, dexamethasone is not necessary unless a child has symptomatic swelling. Dexamethasone has a number of side effects which include mood changes, insomnia, weight gain, fluid retention, glucose instability, high blood pressure, and increased susceptibility to infection.
Paul Graham Fisher, M.D., MHS, is Professor of Neurology and Pediatrics, and The Beirne Family Professor of Neuro-Oncology, at the Stanford University School of Medicine and Lucile Salter Packard Children's Hospital. Dr. Fisher voluntarily serves as a member of the Childhood Brain Tumor Foundation's Scientific/ Medical Advisory.
Michelle Monje, M.D., Ph.D., is Instructor of Neurology at Stanford University School of Medicine and Lucile Packard Children’s Hospital. Dr. Monje’s research focuses on the biology of brain stem gliomas and neural stem cell biology.
Revised 4/10/10
Reviews by the Andreolas
A review by Karen Andreola:
Christian Liberty Nature Readers
When I purchase books for our home school, they don't always end up to be books that have glossy, full-color pictures. Sometimes they are the more humble-looking ones. But very often, humble though they may appear, they possess a writing style capable of drawing my younger students into reading their assignments with satisfaction and enjoyment. Nature Readers are just this sort of book.
For example, in Book 3 (average third grade reading level), a fisherman tells stories about Old King Crab, Mr. Barnacle and his son, and the flowers of the sea, which are really animals. At the end of one chapter, describing the business of ants, the author writes, "This seems like a fairy tale, but it is quite true. All these things can be seen if you look out for them." Some questions in these readers give the student the opportunity for little tellings (narration). These are typical prompts: Describe a barnacle fishing party. What can you tell me about giant beetles? Why is it so hard to pull a worm out of his hole? How do ants treat each other while they are at work? Book 4 has more creatures. The author shares surprising observations of baby hummingbird activity. Book 5 describes the basic parts of the human body, as well as useful and varied parts of the bodies of animals. Approx. 200 pages each, softcovers from Christian Liberty Press.
Lost Highway, by Richard Currey
By Richard Currey, Lost Highway. (Houghton Mifflin, 258 pp.)
How country music star Sapper Reeves lost his way and was found again is the theme of Richard Currey's second novel. It is a story as haunting and beautifully crafted as an old ballad. Spanning the years from World War II to Vietnam, Lost Highway vividly depicts the toll that life on the road takes on musicians: the ramshackle road houses where patrons brawl; restless sleep in a car's front seat; simmering, alcohol-fueled tensions between band members; unpaid bills from the gas company and the grocer; a promoter's unkept promises; the distance in a far-away spouse's voice; the disappointment when the music doesn't get the hearing it deserves.
Tennessee - Income
According to the Bureau of Economic Analysis, in 2001, Tennessee had a per capita personal income (PCPI) of $26,808 which ranked 36th in the United States (including the District of Columbia) and was 88% of the national average, $30,413. The 2001 PCPI reflected an increase of 2.0% from 2000 compared to the national change of 2.2%. In 2001, Tennessee had a total personal income (TPI) of $154,129,629,000 which ranked 20th in the United States and accounted for 1.8% of the national total. The 2001 TPI reflected an increase of 2.8% from 2000 compared to the national change of 3.3%.
Earnings of persons employed in Tennessee increased from $110,654,536,000 in 2000 to $112,771,356,000 in 2001, an increase of 1.9%. The largest industries in 2001 were services,29.2% of earnings; durable goods manufacturing, 10.7%; and retail trade, 10.4%. Of the industries that accounted for at least 5% of earnings in 2001, the slowest growing from 2000 to 2001 was durable goods manufacturing, which decreased 6.9%; the fastest was state and local government (10.3% of earnings in 2001), which increased 5.9%.
According to data released by the US Census Bureau, in 2000, the median household income was $33,885 compared to the national average of $42,148. In 2001, the median income for a family of four was $56,052 compared to the national average of $63,278. For the period 1999 to 2001, the average poverty rate was 13.2% which placed it 40th among the 50 states and the District of Columbia ranked lowest to highest.
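The percentages reported above follow directly from the dollar figures; a quick arithmetic check using the numbers quoted in this section:

```python
# Tennessee per capita personal income as a share of the national average (2001)
pcpi_tn, pcpi_us = 26_808, 30_413
pct_of_us = 100 * pcpi_tn / pcpi_us
print(f"TN PCPI: {pct_of_us:.0f}% of the national average")  # 88%

# Growth in earnings of persons employed in Tennessee, 2000 to 2001
earn_2000, earn_2001 = 110_654_536_000, 112_771_356_000
growth = 100 * (earn_2001 / earn_2000 - 1)
print(f"Earnings growth, 2000 to 2001: {growth:.1f}%")  # 1.9%
```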
Democrats aim to close women's wage gap with a tougher fairness-pay bill
By Plain Dealer wire services, The Plain Dealer
on May 24, 2012 at 9:51 PM, updated May 24, 2012 at 10:02 PM
WASHINGTON -- Two pharmaceutical reps shared the same job description and sales quota. They called on the same clients and split their commission 50/50. But the man earned a base salary 60 percent higher than his female partner.
When Fort Lauderdale, Fla., attorney Karen Coolman Amlong sued her client's employer, the answer she received shocked her.
"We have to pay him more or else the competition will hire him away," the employer told Amlong.
"They think the woman is going to get married and have children," Amlong said. "They assume men will stay in the work force. Because they're valued more highly, they're paid more."
Paying men and women different salaries simply because of gender is illegal in the United States, but proving it is so difficult and sometimes so risky for women that widespread inequities remain nearly 50 years after the Equal Pay Act was passed to close the pay gap.
Democrats in Congress are expected to take a tougher pay-fairness bill to a vote in the next few weeks. It's unlikely to receive any Republican support.
"The misnamed Paycheck Fairness Act may help trial lawyers, but it doesn't do one thing to help create jobs for women or improve anyone's wages," said Rep. Tom Rooney, a Florida Republican.
"The end result of this bill would be more lawsuits, fewer jobs and lower wages for everyone."
Supporters of the legislation do not deny that part of the intent is to make it easier to file a lawsuit. They believe such lawsuits, or the threat of them, are good incentives for employers to ensure fairness.
The bill also would prohibit businesses from retaliating against employees who reveal pay information. It is legal for an employer to fire someone who has shared confidential salary data.
"Most of the Equal Pay Act cases that I've handled have been cases in which people have learned by accident" that they are being paid less, Amlong said. "It's certainly not because employers are going to let that information out."
American women earn 77 cents for every dollar men earn, 2010 census figures reveal.
Entering professions known to command high salaries does not begin to level the salary schedule, either.
A White House task force on the pay gap revealed that women are not only paid less for equal work, but are often pigeonholed in lower-paying positions or denied access to jobs traditionally held by men.
Paying women less has long-term consequences, critics of the pay gap say. Many retirement plans, such as 401(k)s, allow enrollees to invest a percentage of their income. If their male colleagues are earning higher salaries, the men are automatically given an advantage in retirement earnings, too.
"The wage gap is probably one reason that women go into retirement with lower savings than men," said Emily Martin, general counsel for the National Women's Law Center.
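The compounding effect is straightforward to illustrate. The salary, contribution rate, and return below are hypothetical assumptions, not figures from the article; the point is that when two workers contribute the same percentage of pay, the 77-cents-on-the-dollar wage ratio carries straight through to the retirement balance, and the dollar gap grows every year:

```python
def retirement_balance(salary, pct=0.06, years=30, annual_return=0.05):
    """Balance after contributing `pct` of salary each year, compounded."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + salary * pct
    return balance

his = retirement_balance(50_000)
hers = retirement_balance(50_000 * 0.77)  # 77 cents per dollar
print(f"ratio of balances: {hers / his:.2f}")  # 0.77 -- the wage gap persists
print(f"dollar gap after 30 years: ${his - hers:,.0f}")
```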
Underpaying women hurts families, supporters of the bill say. In 2008, nearly 40 percent of mothers were family breadwinners, according to the Center for American Progress.
The bill has its critics, even among women.
Being too quick to try to fix the pay gap may end up destroying some of the culture that is working for women, said Sabrina Schaeffer, executive director of the Independent Women's Forum, a conservative nonprofit that studies policy issues.
"The differences in pay between men and women come down to choices," Schaeffer wrote in a May 4 U.S. News and World Report online piece opposing the Paycheck Fairness Act. "More women than men choose to take time off to raise a family, but that's a far cry from discrimination. And costs are the result of a woman's freedom, not an injustice imposed on her by society."
The bill would limit reasons employers can give to pay women less than men. Schaeffer wrote that she worries that will make employers less flexible. | <urn:uuid:3076ea95-d348-4e6e-ba22-7295cf39e23c> | 2 | 1.789063 | 0.031772 | en | 0.975704 | http://www.cleveland.com/nation/index.ssf/2012/05/democrats_aim_to_close_womens.html |
The Call of the Wild By Jack London Summary and Analysis Chapter 2 - The Law of Club and Fang
This chapter introduces London's second, or parallel, theme of the novel. As a matter of historical and scientific information, the late nineteenth century had seen the emergence of Charles Darwin's theory of evolution, a theory which had become, by the time of London's novel, one of the most controversial scientific theories ever advocated. In a nutshell, the essence of Darwin's theory concerns the evolution of mankind — that is, was Man born as he is today? Or is he the end result of a series of evolutions from a more primitive species of life? In other words, in a more popular conception, is Man descended from apelike creatures? This theory, then, is further emphasized by London's use of the "survival of the fittest" (which also carries the opposite connotation of the elimination of the weakest). This chapter introduces Buck into the concepts of the survival of the fittest, and we will see how Buck is able to confront new and different situations, and how he is able to maintain his mastery of life — even in the most adverse conditions. In fact, at the very beginning of the chapter, London emphasizes this contrast: during Buck's first day, London tells us, "every hour was filled with shock and surprise. Buck had been suddenly jerked from the heart of civilization and flung into the heart of things primordial. No lazy, sun-kissed life was this, with nothing to do but loaf and be bored." In fact, Buck learned the law of the club rapidly in the previous chapter; now he will learn the "law of the fang." London is emphasizing that the respected laws of civilization have to be discarded if a man or a beast is to survive in this primitive situation. Buck learns immediately that he must be "constantly alert, for these dogs and men were not town dogs and men; they were savages." In this new society, Buck intuitively recognizes that only the strongest will survive. 
This is illustrated by the death of the good-natured dog called Curly, who, once he is wounded and down, is surrounded by thirty or forty other dogs, anxiously waiting to close in upon Curly, waiting for the primitive kill. What Buck witnesses is so unexpected and horrible that he is stunned by the entire episode, and, in fact, as he sees Curly's limp and lifeless body lying in the bloodied snow, he realizes that there is "no fair play" in this world, and that "once down, that was the end of you." In Buck's later life, he will often remember this gory, unjust scene; it will "trouble his sleep" many times. (We can thus anticipate that Buck's memory of this scene will cause him to hold his ground in later dog fights and to be savagely alert and bold.)
When Buck is harnessed to a sled by François, he is placed between Spitz, the lead dog, and Dave, "an experienced wheeler." (A "wheeler" is the dog nearest the sled.) At first, Buck resents being placed in a harness, as though he were merely some "draft animal" that he remembers from civilization, but Buck is too wise to rebel against this treatment, because he knows that François is "stern in demanding obedience, and Buck [knows] that he would not hesitate to use the whip." For the code of the Far North, the whip is tantamount to what the club was in Buck's first lesson concerning the "law of the club." Buck learns his duties very quickly, and one of the important laws of the primitive world is that one must learn quickly if one is to adapt to new situations and survive. For example, after his first day as a sled dog, Buck learns to "stop at 'ho,' and to go ahead at 'mush.'"
Buck's next learning experience involves the three new dogs that Perrault acquires. Two of these dogs, Billee and Joe, are huskies and brothers, but they are quite different in temperament. The third dog, however, Sol-leks (meaning "the angry one"), is blind in one eye, and he does not like to be approached on his blind side. Once, when Buck forgetfully approaches Sol-leks from the blind side, Sol-leks hurls himself upon Buck and slashes Buck's shoulder to the bone. Forever afterward, Buck avoids Sol-lek's blind side. Thus, continually, Buck learns an entirely new way of living and existing. Yet he and Sol-leks are not enemies because of the episode mentioned above, and until the death of Sol-leks, he and Buck are good friends.
Buck's next lesson in adapting to his new life involves finding a warm place to sleep. He sees lights one night in François and Perrault's tent, and because he has been used to sleeping by the Judge's fireplace, Buck enters their tent, only to be bombarded by curses and flying objects. Wandering around the camp site in the cold bitter wind, that is penetrating his wounded shoulder, Buck is surprised to find that all of the other dogs are, as it were, "teammates," and that they have buried themselves under the snow. Thus, Buck learns how the other dogs sleep and keep warm, so he selects a place for himself and is soon asleep; once again, he learns another lesson about how to survive in this new and hostile country.
Next morning, when Buck awakens, he feels the weight of the night's snow pressing down upon him, and "a great surge of fear swept through him — the fear of the wild thing for the trap." London, quite pointedly, goes on to say that this fear was "a token that [Buck] was harking back through his own life to the lives of his forebears." London writes, "the muscles of [Buck's] whole body contracted spasmodically and instinctively," and bursting out through the layer of snow, he sees the camp spread out before him. That day, Buck has another experience learning to be a sled dog, similar to the incident referred to earlier in these Notes. Buck is now placed between Dave and Sol-leks, who are both experienced dogs and who will teach Buck how to perform. When Buck makes a mistake, both dogs instantly "administer a sound trouncing to him." Buck learns very quickly, and at the end of that day, he is exhausted; after digging his hole in the snow, he falls quickly asleep.
For days, Buck is constantly "in the traces," and even though he is given a half pound of food a day more than the other dogs, he never seems to have enough, and he suffers from perpetual hunger pains. This is due partly to the fact that Buck is a civilized dog and a fastidious eater, and the other dogs wolf down their food, then come over and steal Buck's rations. Buck quickly learns, however, that in order to survive, he too must wolf down his food. In a civilized society, Buck would never have had to steal food, but now he realizes that in order to survive and thrive in this hostile northern environment, he will have to learn to steal in very secret and clever ways. According to London, Buck's thefts of food "marked the decay or going to pieces of his moral nature." But what Buck is learning is that in such a wilderness as this, his old sense of morality is a hindrance to survival.
Buck, however, reasons that in order to survive, he must adjust — in every way he can. It was one thing to respect private property in the Southland, where the law of love and fellowship reigned, but here in the Northland, "under the law of club and fang, it was foolhardy to observe any law that did not contribute to one's own personal survival." London writes that, although Buck did not exactly figure this out in "thoughts," the man in the red sweater had taught him about this very fundamental and primitive code. Buck's "decivilization was now almost complete because he did not steal out of joy," and "he did not rob openly, but, instead, he stole secretly and cunningly out of respect for club and fang."
Continuing with this concept of the survival of the fittest, Buck also soon learns that he can eat any type of food (even loathsome food) so long as doing so will help him survive. Furthermore, Buck's sight, his scent, and his hearing quickly develop a keenness which he never knew in civilized society. He is now even able to scent the wind, and he can tell what the weather will be like a night in advance. "And not only did he learn by experience, but instincts long dead had become alive again."
Carrying through with London's concept of naturalism (that maintains that there is a dimension of the primitive in all of us), Buck is beginning to remember back to ancient times before his own existence, to a time "when wild dogs ranged in packs through the primeval forest and killed their meat as they ran it down." Furthermore, on cold nights, Buck often points his nose toward the sky and howls like a wolf; "it was as though his ancestors . . . [were] pointing their noses at the stars and howling down the centuries and through him." This anticipates the final chapter of the novel when Buck will be seen roaming the forest with the wolf pack and will be seen answering the call of the wild by howling with the other wolves.
undergrad senior project idea, help
Robert G. Brown rgb at phy.duke.edu
Fri Sep 6 09:05:40 EDT 2002
> My hysterically outdated cluster
> (http://superid.virtualave.net/beowulf.jpg) was a collection of 33 and
> 66 mhz 486'en :)
> So for demonstration purposes it might actually be more interesting to
> find systems that are being thrown away so that you can "rescue" them!
It is lovely to recycle old boxen for learning/demo purposes, but it is
also important to be aware of a couple of gotcha's. One is that it gets
harder and harder to run modern kernels on really old boxes, as they
tend to need a fair chunk of resources to run at all. A second one is
related to a mix of Moore's Law and the cost of electricity.
Old boxes or new -- they tend to burn somewhere between 60 and 100 watts
(presuming we're not talking about bleeding edge duals, and depending on
just how loaded they are). 100 Watts running 24x7 for a year costs $70
at $0.08/kWh. Running them inside a building (where we have to
remove the heat) is likely to add somewhere between 1/6 (if one can use
the heat during part of the year, as in a home during the winter) and
1/2 this cost, plus the space they occupy is a cost, plus the
maintenance and admin is a cost (which we'd better neglect or it would
REALLY skew this argument:-) -- call non-labor cost of operation
$100/year just to make the arithmetic easy -- a dollar a watt in round numbers.
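(The electricity arithmetic above can be checked in a few lines of Python. This sketch is not from the original post; the $0.08/kWh rate and the wattage figures are the post's own assumptions.)

```python
# Annual electricity cost for a box drawing a constant load,
# using the rate assumed in the post above.
RATE_PER_KWH = 0.08          # dollars per kilowatt-hour
HOURS_PER_YEAR = 24 * 365    # running 24x7

def annual_electric_cost(watts, rate=RATE_PER_KWH):
    """Dollars per year to power a box drawing `watts` continuously."""
    kwh_per_year = watts / 1000.0 * HOURS_PER_YEAR
    return kwh_per_year * rate

# A 100 W box comes to about $70/year, matching the figure quoted above.
print(round(annual_electric_cost(100), 2))   # -> 70.08
```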
Now, let's assume that a 486 running at 66 MHz can on a good day execute
one instruction per cycle, without worrying too much about what an
"instruction" is. Maybe a float, maybe an int. Let's assume also that
the ones we are using are only burning 50 W (and so cost only $50/year
to operate). Thus our 486 can run at "66 (bogo)MIPS".
A current 2 GHz P4 system (in addition to coming with more disk and
memory, and supporting a far faster network) costs (say) $700 up front
and runs roughly 2000 (bogo)MIPS, or 30x as much. It draws about 100W
(to be generous) and hence costs about $100/year to operate.
Hmmm. A 30 node 486 cluster has about the same aggregate bogoMIPS as a
single P4. It costs $1500/year to operate in electricity and cooling
and shelf/floor space. Even allowing for the cost of buying the P4, it
is twice as cheap and we haven't even discussed Amdahl's Law with NICs
on the old 486 ISA bus yet...
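(As a back-of-the-envelope sketch -- again not from the original post -- the cluster comparison works out like this, using the post's round numbers: an all-in operating cost of about $1 per watt per year, 66 bogoMIPS per 50 W 486, and 2000 bogoMIPS per 100 W P4.)

```python
# All-in yearly operating cost, at the post's round figure of $1/watt/year.
COST_PER_WATT_YEAR = 1.0

def yearly_cost(watts, nodes=1):
    """Dollars per year to operate `nodes` boxes drawing `watts` each."""
    return watts * COST_PER_WATT_YEAR * nodes

# Thirty 486s (66 bogoMIPS, 50 W each) vs one 2 GHz P4 (2000 bogoMIPS, 100 W).
cluster_mips = 30 * 66                      # 1980 bogoMIPS, roughly one P4
cluster_cost = yearly_cost(50, nodes=30)    # $1500/year for the 486 cluster
p4_cost = yearly_cost(100)                  # $100/year for the single P4

print(cluster_mips, cluster_cost, p4_cost)  # -> 1980 1500.0 100.0
```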
To me it isn't at all clear that recycling old computers this way is
really "green" -- good for the environment in aggregate. Yes, you find
a home for many computers that might otherwise make it to the landfill
(or might better be properly recycled to recover their toxic metals).
OTOH, you burn a lot more free energy to get anything done. This latter
argument is even stronger if you compare the energy costs of tower 486's
to the energy costs of a laptop, which might burn only 25W even running
at > GHz speeds. One of the motivations cited for Transmeta/Blade
computers -- they don't run the highest possible clock, but they are
VASTLY cooler and cheaper to operate than my stack of dual Athlons...;-)
So, for fun, 486's are fine if you can afford to feed them. As a
learning exercise (in perhaps a school), they are also just lovely,
especially if you can foist/hide their real cost of operation in the
building's electricity budget, which is often a lot easier than trying
to get money to buy a single modern computer. However, they are NOT
efficient ways to get any sort of useful work done. Neither are 133 MHz
586/Pentia or 200 MHz P6 class CPUs. Even a free 400 MHz/bogoMIP PIII
costs $500/year to operate (they tend to burn more like 100 W instead of
50) vs $100 to get the same aggregate MIPS as a P4, making them a break
even proposition on perfectly scalable code over 2 years, NEGLECTING
admin costs. This is pretty much the oldest speed class that it makes
sense to operate for production in an administratively efficient
environment, and even these are pretty much ready to retire.
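(The break-even claim for free PIIIs can be sketched the same way; this is my illustration, not the post's. The figures are the post's: five 400-bogoMIPS PIIIs at 100 W each deliver the aggregate MIPS of one $700, 100 W P4, at the same $1/watt/year operating cost.)

```python
def tco(purchase, watts, nodes, years, dollars_per_watt_year=1.0):
    """Total cost of ownership: purchase price plus operating cost."""
    return purchase + watts * dollars_per_watt_year * nodes * years

# Five free 400-bogoMIPS PIIIs (~100 W each) match one 2000-bogoMIPS P4.
free_piii = tco(purchase=0,   watts=100, nodes=5, years=2)   # $1000 over 2 years
bought_p4 = tco(purchase=700, watts=100, nodes=1, years=2)   # $900 over 2 years

print(free_piii, bought_p4)   # -> 1000.0 900.0
```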
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Beowulf mailing list, Beowulf at beowulf.org
More information about the Beowulf mailing list | <urn:uuid:593f1e9c-1413-4f75-b7b9-43687f7e69d5> | 2 | 1.984375 | 0.15118 | en | 0.916841 | http://www.clustermonkey.net/pipermail/beowulf/2002-September/029313.html |