Water creatures caught stealing DNA
Tiny freshwater organisms that have a sex-free lifestyle may have survived so well because they steal genes from other creatures, US scientists report.
Researchers from Harvard University in Cambridge, Massachusetts, have found genes from bacteria, fungi and even plants incorporated into the DNA of bdelloid rotifers - minuscule animals that appear to have given up sex 40 million years ago.
Their report appears in this week's edition of Science.
Sex is used by most life forms as a way of coping with changing circumstances, by allowing organisms to develop useful new genes and ditch harmful, mutated ones.
The resilience of bdelloids and their sex-free lifestyle have stumped scientists.
The team, headed by Professor Matthew Meselson, looked at the DNA of bdelloid rotifers to see how they manage to survive and evolve.
It appears they overcome this hurdle by stealing DNA from other organisms.
"Our result shows that genes can enter the genomes of bdelloids in a manner fundamentally different from that which, in other animals, results from the mating of males and females," says Meselson.
"We found many genes that appear to have originated in bacteria, fungi, and plants."
The translucent, waterborne creatures, which range in size from 0.1 to 1 millimetres long, lay eggs, but all their offspring are female.
The researchers believe that when bdelloids dry out, they fracture their genetic material and rupture cellular membranes. When they rehydrate, they rebuild their genomes and their membranes, incorporating shreds of genetic material from other bdelloids and unrelated species in their vicinity.
"These fascinating animals not only have relaxed the barriers to incorporation of foreign genetic material, but, more surprisingly, they even managed to keep some of these alien genes functional," says report co-author Dr Irina Arkhipova.
According to the researchers, the next step is to determine whether bdelloid genomes also contain homologous genes imported from other bdelloids.
Meselson and his colleagues also hope to examine whether the animals actually use any of the hundreds of snippets of foreign DNA they appear to vacuum up.
Understanding how the animals acquire and make use of these new genes could have implications for medicine.
Genetic mutations, which occur constantly in any living organism, underlie cancer, heart disease and various other diseases.
While any kind of dog can attack, some breeds are more prone to attacks than others. In fact, some dogs are more likely than others to kill humans.
The Centers for Disease Control and Prevention (CDC) estimates that more than 4.7 million people are bitten by dogs every year. Of those, 20 percent require medical attention.
In a 15-year study (1979-1994), a total of 239 deaths were reported in the United States as a result of injuries from dog attacks. Through its research, the CDC compiled a list of the breeds most responsible for human fatalities.
The study found that most dog-bite-related deaths happened to children. But, according to the CDC, there are steps children (and adults) can take to cut down the risk of an attack from family pets as well as from dogs they are not familiar with:
-Don't approach an unfamiliar dog.
-If an unfamiliar dog approaches you, stay motionless.
-Don't run from a dog or scream.
-If a dog knocks you down, roll into a ball and stay still.
-Avoid looking directly into a dog's eyes.
-Leave a dog alone that is sleeping, eating or taking care of puppies.
-Let a dog see and sniff you before petting it.
-Don't play with a dog unless there is an adult present.
-If a dog bites you, tell an adult immediately.
But, the CDC's report says most attacks are preventable in three ways:
1. "Owner and public education. Dog owners, through proper selection, socialization, training, care, and treatment of a dog, can reduce the likelihood of owning a dog that will eventually bite. Male and unspayed/unneutered dogs are more likely to bite than are female and spayed/neutered dogs."
2. "Animal control at the community level. Animal-control programs should be supported, and laws for regulating dangerous or vicious dogs should be promulgated and enforced vigorously. For example, in this report, 30% of dog-bite-related deaths resulted from groups of owned dogs that were free roaming off the owner's property."
3. "Bite reporting. Evaluation of prevention efforts requires improved surveillance for dog bites. Dog bites should be reported as required by local or state ordinances, and reports of such incidents should include information about the circumstances of the bite; ownership, breed, sex, age, spay/neuter status, and history of prior aggression of the animal; and the nature of restraint before the bite incident."
CDC officials did make one important note about the list: The reporting of the breed was subjective. There is no way to determine whether the identification of the breed was correct, or to verify whether the dog was a purebred or a mixed breed.
Copyright 2011 Scripps Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Science Fair Project Encyclopedia
Battle of the Kentish Knock
The Battle of the Kentish Knock (also known as the Battle of the Zealand Approaches) was a naval battle of the First Anglo-Dutch War fought on 8 October 1652 near the shoal called the Kentish Knock in the North Sea about 30 km from the mouth of the river Thames.
Dutch Admiral Maarten Tromp had been suspended after his failure to bring the English to battle off the Shetland Islands in August, and replaced by Admiral Witte de With, who saw an opportunity to concentrate his forces and gain control of the seas. He set out to attack the English fleet at anchor at the Downs near Dover on 5 October 1652, but the wind was unfavourable.
When the fleets finally met on 8 October, the United Provinces had 57 ships; the Commonwealth of England 68 ships under General at Sea Robert Blake. Action was joined at about 17:00. The English ships were larger and better armed than their opponents and by nightfall two Dutch ships had been captured and about twenty — mostly commanded by captains from Zeeland who resented the domination of Holland — had broken off the engagement. De With withdrew the rest of his force with many casualties.
The Dutch recognized after their defeat that they needed larger ships to take on the English, and instituted a major building program that was to pay off in the Second Anglo-Dutch War.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Sampling frequency
The sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. The inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples.
Strictly speaking, a sampling frequency applies only to samplers that take each sample periodically; nothing prevents a sampler from taking samples at a non-periodic rate.
If a signal has a bandwidth of 100 Hz then to avoid aliasing the sampling frequency must be greater than 200 Hz.
In some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter. This process is known as oversampling.
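The folding behavior behind this rule can be made concrete with a short sketch (an illustrative helper, not from any particular library) that computes where a tone lands after periodic sampling:

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency, in Hz, of a sinusoid after periodic sampling.

    Sampling folds every input frequency into the baseband
    [0, f_sample / 2]; frequencies above the Nyquist limit
    reappear ("alias") as lower ones.
    """
    folded = f_signal % f_sample            # periodic folding at the sampling rate
    return min(folded, f_sample - folded)   # reflect into [0, f_sample / 2]

# A 100 Hz tone sampled at 250 Hz (above the 200 Hz Nyquist rate) is preserved:
print(aliased_frequency(100, 250))  # 100
# Sampled at only 150 Hz, it masquerades as a lower tone:
print(aliased_frequency(100, 150))  # 50
```

For the 100 Hz bandwidth example above, any sampling frequency greater than 200 Hz leaves the signal's frequency intact, while lower rates fold it down to a false frequency.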
In digital audio, common sampling rates are:
- 8,000 Hz - telephone, adequate for human speech
- 11,025 Hz
- 22,050 Hz - radio
- 44,100 Hz - compact disc
- 48,000 Hz - digital sound used for films and professional audio
- 96,000 or 192,000 Hz - DVD-Audio, some LPCM DVD audio tracks, BD-ROM (Blu-ray Disc) audio tracks, and HD-DVD (High-Definition DVD) audio tracks
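These rates map directly onto storage cost. As a minimal sketch (the helper name is made up for illustration), the raw data rate of uncompressed PCM audio is just the product of sample rate, sample size, and channel count:

```python
def pcm_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    """Raw (uncompressed) PCM data rate in bytes per second."""
    return sample_rate_hz * (bits_per_sample // 8) * channels

# Compact disc audio: 44,100 Hz, 16-bit samples, stereo
print(pcm_bytes_per_second(44_100, 16, 2))  # 176400 bytes/s
```

At 176,400 bytes per second, a minute of CD audio occupies roughly 10.6 million bytes, which is why higher rates such as 96 or 192 kHz are usually paired with the larger capacities of DVD and Blu-ray media.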
In digital video, which uses a CCD as the sensor, the sampling rate is defined by the frame/field rate, rather than the notional pixel clock. All modern TV cameras use CCDs, and the image sampling frequency is the repetition rate of the CCD integration period.
- 13.5 MHz - CCIR 601, D1 video
See also
- Continuous signal vs. Discrete signal
- Digital control
- Sample and hold
- Sample (signal)
- Sampling (information theory)
- Signal (information theory)
In one form or another, the sustainment warfighting function described in Field Manual (FM) 3–0, Operations, has been an essential feature of the Army’s operational past since at least World War I. The sustainment concept was institutionalized in March 1942 as part of a massive Army reorganization that accompanied the entry of the United States into World War II. Driven by Chief of Staff of the Army General George C. Marshall, the reorganization aimed to reduce the number of officers and organizations that had immediate access to him. The resulting reorganization restructured the Army into three major commands: the Army Ground Forces (AGF), the Army Air Forces (AAF), and a command initially called the Services of Supply (SOS)—the Army’s sustainment command. Everything that did not fit clearly into the AGF or the AAF went to the SOS. Lieutenant General Brehon B. Somervell was selected to command the SOS organization.
Army Service Forces
In March 1943, the War Department staff renamed the SOS the “Army Service Forces” (ASF) because they thought the word “supply” did not accurately reflect the broad range of activities that had been assigned to the command. At the War Department level, the ASF was a consolidation of logistics, personnel, and administrative functions. Under ordinary circumstances, these functions were the responsibility of the War Department G–4 and G–1, who relied on the technical and operational support of the Finance, Judge Advocate General’s, and Adjutant General’s Departments; the Chaplain Corps; Inspector General; Provost Marshal General; and Chief, Special Services.
Nothing about the ASF organization was simple or uncomplicated. As recorded in the Army’s official history of the organization, the ASF was without “direct precedent” and unusual “in the variety of tasks entrusted to it. . . . [I]t was a hodgepodge of agencies with many and varied functions.” From the beginning until it was disestablished in 1946, “the ASF struggled constantly to build a common unity of purpose and organization.” Lieutenant General Somervell, a career logistician, admitted never liking the part of the reorganization that gave him responsibility for personnel. He gave most of his attention to the monumental task of procurement and supply.
However “hodgepodge” it may have been, the ASF survived the war, fulfilling its massive responsibility of supporting the millions of U.S. Soldiers located all over the globe in multiple theaters of operations. One unifying factor that kept Somervell on task and held the ASF together was the obligation to sustain warfighting commanders and the Soldiers who served them. If unity of purpose was lost to the ASF organization, the ASF gained from efficiencies resulting from the unified effort to sustain our Soldiers at war.
Combat Service Support Group
Following World War II, the Army began establishing combat development agencies as a way for each branch of the Army to integrate new technologies and tactical organizations into the combat Army. Ultimately, all combat development agencies were realigned under a unified Combat Developments Command (CDC) in 1962 as part of an extensive reorganization of the Army. The CDC established two combat development “integrating agencies” modeled after the mission and functions of the AGF and ASF of World War II. One agency integrated the development of combat and combat support functions, and the other, the Combat Service Support Group, acted as integrator for what we today would call the sustainment function.
The combat development agencies of the Adjutant General’s, Finance, Judge Advocate General’s, and Chaplain branches were joined with the various logistics combat development agencies of the Quartermaster, Ordnance, and Transportation branches to form the Combat Service Support Group, headquartered at Fort Lee, Virginia. Corresponding with the larger Army reorganization, the Army Command and General Staff College adopted the concept of combat service support to identify the varied, yet related, functions that together defined the sustainment mission. In its essence, the Combat Service Support Group represented a reconstitution of the sustainment concept embedded in the ASF of World War II. The CDC managed the Army’s total combat development effort until the end of the Vietnam War.
Personnel Issues During the Vietnam War
Following the Vietnam War and the gut-wrenching realization that many of the Army’s most serious operational issues were related to the “personnel system,” senior leaders of the Army began to question the ASF model that had framed the sustainment concept since the beginning of World War II. Early in the Vietnam War, it had taken the wife of an Army battalion commander embroiled in the Battle of Ia Drang Valley to convince senior Pentagon officials that yellow-cab delivery of casualty notification telegrams to Soldiers’ next-of-kin was deeply insensitive and destructive of homefront morale. The draft, used to sustain manpower levels in the Vietnam War, had embittered many who objected to conscription on principle and others who believed it forced into service a disproportionate number of poor, working-class, and minority members of U.S. society. Racial problems in society at large had been magnified in the military by the collapsing public support for the war. Drug and alcohol abuse among military personnel was rampant.
Replacement and rotation policies that caused constant personnel turbulence had undermined unit integrity and the commitment of Soldiers to one another and the mission. Perceived failings of command in Vietnam gave rise to the study of military leadership and the historical and ethical foundations of the military profession. Together with the dissolution of the draft, the advent of the all-volunteer Army, and the commitment to more thoroughly integrate women into the force, the personnel lessons of the Vietnam War created a highly charged environment conducive to a full-scale assault on the Army’s personnel system.
Army Training and Doctrine Command
Emerging from the many discussions concerning the personnel lessons learned from the Vietnam War were plans to establish a “clearing house” (an administrative center or school complex) that would form the center of gravity for an Army-wide personnel system. The opportunity to establish an agency of this kind came with Operation Steadfast, the 1973 reorganization of the Army that disestablished the Continental Army Command and the Combat Developments Command. From Operation Steadfast came two new commands, the Army Training and Doctrine Command (TRADOC) and the Army Forces Command.
TRADOC, as the name implied, became responsible for Army training, doctrine, and combat developments. At the core of the new TRADOC organization were three mid-level “integrating centers” for combat developments: the Combined Arms Center (CAC) at Fort Leavenworth, Kansas; the Logistics Center (LOGC) at Fort Lee; and the Administration Center (ADMINCEN) at Fort Benjamin Harrison, Indiana. CAC and LOGC were essentially re-creations of former Combat Developments Command operating agencies; ADMINCEN was a new organization altogether.
Based partly on lessons from the Vietnam experience, planners intended ADMINCEN to become the collection point for all matters related to the Army’s personnel system and the human dimension of military operations. It was a kind of doctrinal “think tank” and training ground that directly extended from the mission of the Army G–1 and its associated branches and specialties.
Considerable resistance to ADMINCEN was voiced by members of the Operation Steadfast study group, who balked at the idea of elevating personnel doctrine, training, and combat developments to near-equal status with the combined arms and logistics missions. However, the Continental Army Command commander, General Ralph E. Haines, Jr., directed that ADMINCEN be included in the detailed plan of reorganization. The establishment of ADMINCEN reflected the view of General Haines and other senior military officials that a refashioned personnel system was critical to restoring public confidence in the Army, recovering from the war’s assault on Soldier morale and unit cohesion, and building an all-volunteer force.
Chief of Staff of the Army General Creighton W. Abrams, Jr., testifying before the Senate Appropriations Committee in March 1974, called the management of human resources the Army’s “single most important function. . . . Unless we run our people programs well, the Army itself will not be well.” Likewise, Lieutenant General Bernard W. Rogers, then the Army’s Deputy Chief of Staff for Personnel, began to take a hard look at the way the Army managed its people. He said that the Army’s personnel system should “provide in the right place at the right time the required number of qualified, motivated people to accomplish the Army’s mission, and to provide for their maintenance and care as well as that of their dependents.”
As the Army’s focal point for personnel and personnel systems, ADMINCEN became the proponent for a new category of military operations called personnel service support (PSS). In July 1973, ADMINCEN was activated at Fort Benjamin Harrison. The Personnel and Administration Combat Development Activity, ADMINCEN’s combat development activity, assumed responsibility for integrating the doctrine, organization, and equipment developments of the Adjutant General’s, Finance, Chaplain, Judge Advocate General’s, Medical Service, and Women’s Army Corps. The Personnel and Administration Combat Development Activity’s integrating mission also included the Defense Information School (for public affairs) and the Army School of Music (for Army bands).
The three-center model, which was the basis for TRADOC’s organization, constituted a restructuring of the sustainment model that had been in place since the Army reorganized for World War II. Instead of the one-piece model, Operation Steadfast institutionalized a two-piece model—one piece to address logistics functions and another for personnel and administration.
Much like ASF of old, ADMINCEN became a magnet for every developmental mission and program that did not fit clearly into either combat and combat support (CAC’s focus) or logistics (LOGC’s focus) mission areas. Also like ASF, ADMINCEN struggled from the beginning to build a commonly held vision and understanding of purpose and mission. During the command’s 17-year history, it went through no less than 10 major reorganizations, each hoping to build a unity of purpose that had eluded it from the very beginning. In 1980, ADMINCEN reorganized into the Army Soldier Support Center as a result of the mandate to manage and develop programs related to the human dimension of military operations.
Soldier Support Institute
The collapse of the Soviet Union and the end of the Cold War in the late 1980s brought immediate demands from Congress and the public at large to radically reduce the defense budget and take advantage of the “peace dividend.” Those demands essentially called for the demobilization of the Nation’s defense structure that had been built to deter Soviet and Communist aggression around the world. The war against Iraq in 1990 and 1991 interrupted the debate but did little to alter the political intent to reduce deficit spending and shift public funds formerly allocated for defense to other areas.
TRADOC’s initial response to the reality of post-Cold War military budgets was to “reengineer” its combat development program. A significant piece of the plan called for eliminating the Army Soldier Support Center by consolidating it with LOGC at Fort Lee. The resulting organization, the Army Combined Arms Support Command (CASCOM), like the Combat Service Support Group before it, assumed responsibility for the combat, doctrine, and training developments of the Army’s logistics and personnel and administrative functional areas. The Soldier Support Center was reduced to a “schools” center, the Army Soldier Support Institute, which included the Adjutant General, Finance, and Recruiting and Retention Schools and a Noncommissioned Officer Academy.
The May 1990 CASCOM organization plan went through four phases and took 4 years to complete. Under phase 1 of the plan, people and funds supporting the PSS integrating mission were transferred to CASCOM. The final phase of the project called for the transfer of combat and training development programs of the Ordnance Center and Schools at Aberdeen Proving Ground, Maryland, and Redstone Arsenal, Alabama, and the Transportation School at Fort Eustis, Virginia, to Fort Lee to be consolidated with like assets from the Quartermaster School. The Ordnance and Transportation Schools, however, continued to provide classroom instruction at their original locations. The consolidation marked the elevation of LOGC from an integrating center to an agency responsible also for capability and training developments for the logistics community (the Ordnance, Transportation, and Quartermaster Schools).
Since the Soldier Support Institute was in the process of moving from Fort Benjamin Harrison to Fort Jackson, South Carolina, under a Defense Base Closure and Realignment (BRAC) Commission mandate, the combat and training development assets of the Soldier Support Institute were exempted from the move to Fort Lee. The people and programs that would have moved to Fort Lee were already committed to moving to Fort Jackson and the multimillion dollar facilities that were being constructed there to receive them.
Problems With Integration Under CASCOM
Senior leaders of the Army’s personnel and finance communities were also concerned that capability and training development support for the Adjutant General and Finance Schools would largely disappear in an organization devoted primarily to the Army’s logistics mission. Many of the Army-wide personnel programs formerly sponsored by the Soldier Support Center began to flounder with the transfer of the PSS integrating mission to CASCOM.
At issue was the family of human resource programs belonging to no particular branch of the Army but closely connected to the Army’s Deputy Chief of Staff for Personnel. The Soldier Support Center in the early 1980s, for instance, sponsored the development and integration of the Army’s new manning system and the follow-on regimental system intended to strengthen unit cohesion and the bonds of affiliation that tied Soldiers to particular units and Army branches. Much of the justification for the establishment of the Army Community and Family Support Center in 1984 resulted from the Soldier Support Center’s sponsorship of an expanded Army Community Services program and various studies and programs related to the impact of Soldiers’ service and sacrifice on Army families.
Under the transfer of the integrating function, statutory responsibility for human resources had been vested with CASCOM, the responsible agent for integrating both logistics and personnel issues across the Army. However, one of the first issues to confront the commandant of the Adjutant General School in 1994 was whether the Army’s Adjutant General’s Corps ought to assume responsibility for equal opportunity (EO) and other related human resources programs. Knowing that the Army’s Deputy Chief of Staff for Personnel needed a TRADOC advocate for human resources, the Adjutant General School commandant absorbed the EO mission into the Adjutant General’s Corps’ doctrine, training, and combat developments program. In taking responsibility for other human resources programs, the Adjutant General’s Corps, as the technical proponent for the Army’s personnel system, had broadened its mission to include responsibility for “people” programs and other human-dimension programs that were formerly a part of the Soldier Support Center’s capabilities development integrating mission.
[Photo caption: A Soldier with the 147th Adjutant General Postal Company from Kaiserslautern, Germany, inspects a box that a Soldier is sending home from Iraq.]
In 1993, TRADOC published its first attempt at post-Cold War operational doctrine: FM 100–5, Operations. The 1993 version of FM 100–5 listed six critical logistics functions that together constituted combat service support. Of the six, two addressed the former PSS functional area. The chapter titled “Manning the Force” described personnel readiness management, replacement management, and casualty management. The chapter titled “Sustaining Soldiers and their Systems” included health service support, personnel services, financial services, public affairs, and religious and legal support.
For leaders and Soldiers belonging to the personnel and administrative areas of the Army mission, the interchangeable use of the terms “logistics” and “combat service support” validated previous predictions about CASCOM’s narrow focus on logistics. Sustainment functions falling within the combat service support functional area but outside the logistics domain had become afterthoughts.
[Photo caption: A Soldier who serves as a debt management and special action noncommissioned officer for the 101st Finance Company, 10th Sustainment Brigade Troops Battalion, files his daily paperwork.]
The Sustainment Warfighting Function
The most recent version of Army operational doctrine, FM 3–0, Operations, resolves the exclusionary problems created by earlier definitions by rescinding the terms “combat arms,” “combat support,” and “combat service support,” which had described the three functional areas represented in planning and conducting a military operation. In their place, the FM names eight elements of combat power: leadership, information, movement and maneuver, fires, intelligence, command and control, protection, and sustainment. These are believed to reflect the contemporary, if not the past, operating environment more accurately.
Together, the eight elements of combat power point to a new and broader understanding of combined arms operations. Instead of the narrow combination of weapon systems, the new definition applies leadership and information and selected warfighting functions in a “synchronized and simultaneous” fashion to achieve the “full destructive, disruptive, informational, and constructive potential” of combat power.
Sustainment, one of the six warfighting functions, has replaced combat service support as the approved concept used to describe the collective tasks and related logistics, personnel services, and health services systems essential to support the operational Army in the fulfillment of a given mission. From a branch and specialty perspective, sustainment involves the combined functions and capabilities provided by the Adjutant General’s, Chaplain, Finance, Judge Advocate General’s, Medical Service, Ordnance, Quartermaster, and Transportation Corps. Based on recent experience, our new doctrine is a candid admission that successful military operations in the full-spectrum environment of the 21st century require a measured, combined, and focused application of the various elements of combat power. Regardless of size and scope, the sustainment community’s ability to provide commanders at the right time and place with all the logistics, personnel, and health services support necessary for mission accomplishment is essential to the success of any future operation.
On 9 January 2009, officials at Fort Lee, Virginia, dedicated the new Sustainment Center of Excellence (SCoE). Established as the result of BRAC decisions, the SCoE represents a further consolidation of CASCOM, the Army Logistics University (formerly the Army Logistics Management College), and the Army Quartermaster, Transportation, and Ordnance Schools. As part of the BRAC plan, the students, faculty, and staff of the Ordnance Mechanical Maintenance School at Aberdeen Proving Ground, the Ordnance Munitions and Electronics Maintenance School at Redstone Arsenal, and the Transportation School at Fort Eustis will move to Fort Lee. The new organization represents a complete consolidation of the logistics community’s doctrine, training, and combat development programs.
SCoE is indeed about the future of logistics and the logistics branches, but it is also about the other elements of the sustainment function—the branches and missions that make up the personnel services and health service support functions. Based on our new doctrine, SCoE also represents our best opportunity in years to unify the effort as well as create a common understanding of purpose that bridges the diverse programs and missions that make up the Army’s total sustainment community. Much of our success as a community will depend on ensuring the proper alignment and integration of non-logistics units and personnel that are currently being added to our theater and expeditionary sustainment commands and sustainment brigades. They, too, are critically necessary for freeing commanders for action, extending operational reach, and prolonging the endurance of our Soldiers, who respond to any and all threats that compromise the safety and well-being of the American people.
Background information about dementia and home care services
In Spain, the provision of home care services is still under development, with about 20% of municipalities offering such services. However, this is not sufficient to cover demand, and it is estimated that only about 1% of elderly people receive home care services provided by the government. The main aim of the social services network is to keep elderly people in their homes for as long as possible.
The vast majority of elderly dependent people have to rely on services provided by informal carers. Care of elderly and dependent people tends to be seen as a family obligation. However, according to a survey carried out in 2001, only 24% of the population believe that children will continue to bear the responsibility for caring for their elderly parents in the future, and the number of elderly people living alone is steadily increasing (Larizgoitia Jauregi, 2004).
Legislation relating to the provision of home care services
The Spanish constitution states that all citizens are entitled to “health protection”. The General Health Law of 1986, which saw the creation of the National Health System, also states that access to health services is a citizen’s right.
In the Spanish Civil Code (Book 1), it is stated that the spouse and children of elderly dependent people are responsible for their maintenance and care which covers everything that is essential for sustenance, shelter, clothing and medical assistance. The extent of the maintenance to be provided depends on the means of the providers and the needs of the dependent person. The obligation to provide maintenance comes to an end when the provider dies or when their wealth has fallen to such a level that continuing to do so would mean having to neglect their own needs or those of their family.
Brothers and sisters also have an obligation to provide maintenance, although they come after spouses and descendants, and their obligation is limited to what is absolutely necessary (Kerschen et al., 2005).
Citizens do not have a legally established right to social services. The provision of such services is at the discretion of the Autonomous Administration. Access rights are governed by legislation at the level of the autonomous communities.
The main criterion of the social services network is to keep the elderly in their own environment for as long as possible; the main social services are therefore aimed at maintaining people in their own homes. There is also a residential care network. These services generally concentrate on attending to dependent elderly people who live alone, and the need to help those with few resources is also recognised.
Organisation and financing of home care services
Health care services are organised by the autonomous communities. Each community has a Health Service and draws up a Health Plan which outlines which activities are necessary in order to meet the objectives of its own Health Service. Amongst other services provided by the health services of the autonomous communities, there is primary care which includes health care in the home and care specifically for the elderly.
Home care services are free for people who are on the minimum pension. People who have an income twice as high as the minimum pension must pay for the services whereas those on an intermediary income must pay a certain amount which is calculated on the basis of their income.
Health care is funded exclusively through general taxation and not through social security contributions. Home social services are financed jointly by the Ministry of Social Affairs, the regional ministries of Social Welfare and the municipalities. Home visits by general practitioners and primary care nurses are funded through the Public Health Service. In addition to government provided services, voluntary associations and not-for-profit associations such as the Red Cross also provide social home care services (Carrillo, 2005).
Kinds of home care services available
Home care services include primary care social services, social work, assistance with household tasks, meals-on-wheels and tele-alarm services. However, these services are not available in all the autonomous communities.
In practice, home care services are more or less limited to household tasks (which also includes laundry and shopping). This seems to be based on the choice of the elderly people many of whom think that personal care should be carried out by the family. This opinion seems to be shared by carers who often prefer to receive formal assistance with household tasks rather than personal care (Valderrama et al., 1997 in Larizgoitia Jauregi, 2004).
Meals-on-wheels is a service that is only available in the cities of Malaga and Cordoba (in Andalusia) and in the city of Lerida (in Catalonia). Teleassistance and telealarm services are offered in at least 10 of the autonomous communities. In Andalusia, Castilla-Leon and Valencia, a service exists which helps to adapt the home to the needs of the dependent person (Imserso 2004 in Larizgoitia Jauregi, 2004).
- Carrillo, E. (2005), WHHO International Compendium of Home Health Care: http://www.nahc.org/WHHO/WHHOcomptext.html
- Kerschen, N. et al. (2005), Long-term care for older persons. In Long-term Care for Older People, conference organised by the Luxembourg Presidency with the Social Protection Committee of the European Union, Luxembourg, 12-13 May 2005.
- Larizgoitia Jauregi, A. (2004), National Background Report for Spain, EUROFAMCARE: http://www.uke.uni-hamburg.de/extern/eurofamcare/documents/nabare_spain_rc1_a4.pdf
- Ylieff, M. et al. (2005), Rapport international – les aides et les soins aux personnes démentes dans les pays de la communauté européenne [International report – help and care for people with dementia in the countries of the European Community], Qualidem, Universities of Liège and Leuven.
Last Updated: Wednesday, 15 July 2009
October 2004 | Volume 55, Issue 5
As America goes into its fifty-fifth presidential election, we should remember that there might have been only one—if we hadn’t had the only candidate on earth who could do the job
Looking back over two hundred years of the American Presidency, it seems safe to say that no one entered the office with more personal prestige than Washington, and only two Presidents—Abraham Lincoln and Franklin Roosevelt—faced comparable crises. The Civil War and the Great Depression, though now distant in time, remain more recent and raw in our collective memory than the American founding, so we find it easier to appreciate the achievements of Lincoln and Roosevelt. Washington’s achievement must be recovered before it can be appreciated, which means that we must recognize that there was no such thing as a viable American nation when he took office as President, that the opening words of the Constitution (“We the people of the United States”) expressed a fervent but fragile hope rather than a social reality. The roughly four million settlers spread along the coastline and streaming over the Alleghenies felt their primary allegiance—to the extent they felt any allegiance at all—to local, state, and regional authorities. No republican government had ever before exercised control over a population this diffuse or a land this large, and the prevailing assumption among the best-informed European observers was that, to paraphrase Lincoln’s later formulation, a nation so conceived and so dedicated could not endure.
Not much happened at the Executive level during the first year of Washington’s Presidency, which was exactly the way he wanted it. His official correspondence was dominated by job applications from veterans of the war, former friends, and total strangers. They all received the same republican response—namely, that merit rather than favoritism must determine federal appointments. As for the President himself, it was not clear whether he was taking the helm or merely occupying the bridge. Rumors began to circulate that he regarded his role as primarily ceremonial and symbolic, that after a mere two years he intended to step down, having launched the American ship of state and contributed his personal prestige as ballast on its maiden voyage.
As it turned out, even ceremonial occasions raised troubling questions because no one knew how the symbolic centerpiece of a republic should behave or even what to call him. Vice President John Adams, trying to be helpful, ignited a fiery debate in the Senate by suggesting such regal titles as “His Elective Majesty” and “His Mightiness,” which provoked a lethal combination of shock and laughter, as well as the observation that Adams himself should be called “His Rotundity.” Eventually the Senate resolved on the most innocuous option available: The President of the United States should be called exactly that. Matters of social etiquette—how the President should interact with the public, where he should be accessible and where insulated—prompted multiple memorandums on the importance of what Alexander Hamilton called “a pretty high tone” that stopped short of secluding the President entirely. The solution was a weekly open house called the levee, part imperial court ceremony with choreographed bows and curtsies, part drop-in parlor social. The levee struck the proper middle note between courtly formality and republican simplicity, though at the expense of becoming a notoriously boring and wholly scripted occasion.
The very awkwardness of the levee fitted Washington’s temperament nicely since he possessed a nearly preternatural ability to remain silent while everyone around him was squirming under the pressure to fill that silence with conversation. (Adams later claimed that this “gift of silence” was Washington’s greatest political asset, which Adams deeply envied because he lacked it altogether.) The formal etiquette of the levee and Washington’s natural dignity (or was it aloofness?) combined to create a political atmosphere unimaginable in any modern-day national capital. In a year when the French Revolution broke out in violent spasms destined to reshape the entire political landscape of Europe, and the new Congress ratified a Bill of Rights that codified the most sweeping guarantee of individual freedoms ever enacted, no one at the levees expected Washington to comment on those events.
Even matters of etiquette and symbolism, however, could have constitutional consequences, as Washington learned in August of 1789. The treaty-making power of the President required that he seek “the Advice and Consent of the Senate.” Washington initially interpreted the phrase to require his personal appearance in the Senate and the solicitation of senatorial opinion on specific treaty provisions in the mode of a large advisory council. But when he brought his proposals for treaties with several Southern Indian tribes to the Senate, the debate became a prolonged shouting match over questions of procedure. The longer the debate went on, the more irritated Washington became. Finally he declared, “This defeats every purpose of my coming here,” and abruptly stalked out. From that time on, the phrase advice and consent meant something less than direct Executive solicitation of senatorial opinion, and the role of the Senate as an equal partner in the crafting of treaties came to be regarded as a violation of the separation-of-powers principle.
Though he never revisited the Senate, Washington did honor his pledge to visit all the states in the Union. In the fall of 1789 he set off on a tour of New England that carried him through 60 towns and hamlets. Everywhere he went, the residents turned out in droves to glimpse America’s greatest hero parading past. And everywhere he went, New Englanders became Americans. Since Rhode Island had not yet ratified the Constitution, he skipped it, then made a separate trip the following summer to welcome the proudly independent latecomer into the new nation. During a visit to the Jewish synagogue in Newport he published an address on religious freedom that turned out to be the most uncompromising endorsement of the principle he ever made. (One must say “made” rather than “wrote” because there is considerable evidence that Thomas Jefferson wrote it.) Whatever sectional suspicions New Englanders might harbor toward that faraway thing called the federal government, when it appeared in their neighborhoods in the form of George Washington, they saluted, cheered, toasted, and embraced it as their own.
The Southern tour was a more grueling affair, covering nearly 2,000 miles during the spring of 1791. Instead of regarding it as a threat to his health, however, Washington described it as a tonic; the real risk, he believed, was the sedentary life of a deskbound President. The entourage of 11 horses included his white parade steed, Prescott, whom he mounted at the edge of each town in order to make an entrance that accorded with the heroic mythology already surrounding his military career. Prescott’s hooves were painted and polished before each appearance, and Washington usually brought along his favorite greyhound, mischievously named Cornwallis, to add to the dramatic effect. Like a modern political candidate on the campaign trail, Washington made speeches at each stop that repeated the same platitudinous themes, linking the glory of the War for Independence with the latent glory of the newly established United States. The ladies of Charleston fluttered alongside their fans when Washington took the dance floor; Prescott and the four carriage horses held up despite the nearly impassable or even nonexistent roads; Cornwallis, however, wore out and was buried on the banks of the Savannah River in a brick vault with a marble tombstone that local residents maintained for decades as a memorial to his master’s visit. In the end all the states south of the Potomac could say they had seen the palpable version of the flag, Washington himself.
During the Southern tour one of the earliest editorial criticisms of Washington’s embodiment of authority appeared in the press. He was being treated at each stop like a canonized American saint, the editorial complained, or perhaps like a demigod “perfumed by the incense of addresses.” The complaint harked back to the primordial fear haunting all republics: “However highly we may consider the character of the Chief Magistrate of the Union, yet we cannot but think the fashionable mode of expressing our attachment... favors too much of Monarchy to be used by Republicans, or to be received with pleasure by the President of a Commonwealth.”
Such doubts were rarely uttered publicly during the initial years of Washington’s Presidency. But they lurked in the background, exposing how double-edged the political imperatives of the American Revolution had become. To secure the revolutionary legacy on the national level required a person who embodied national authority more visibly than any collective body like Congress could convey. Washington had committed himself to playing that role by accepting the Presidency. But at the core of the Revolutionary legacy lay a deep suspicion of any potent projection of political power by a “singular figure.” And since the very idea of a republican Chief Executive was a novelty, there was no vocabulary for characterizing such a creature except the verbal tradition surrounding European courts and kings. By playing the part he believed history required, Washington made himself vulnerable to the most virulent apprehensions about monarchical power.
He could credibly claim to be the only person who had earned the right to be trusted with power. He could also argue, as he did to several friends throughout his first term, that no man was more eager for retirement, that he sincerely resented the obligations of his office as it spread a lengthening shadow of public responsibility over his dwindling days on earth. If critics wished to whisper behind his back that he looked too regal riding a white stallion with a leopard-skin cloth and gold-rimmed saddle, so be it. He knew he would rather be at Mount Vernon. In the meantime he would play his assigned role as America’s presiding presence: as so many toasts in his honor put it, “the man who unites all hearts.”
Exercising Executive authority called for completely different talents than symbolizing it. Washington’s administrative style had evolved through decades of experience as master of Mount Vernon and commander of the Continental Army. (In fact, he had fewer subordinates to supervise as President than he had had in those earlier jobs.) The Cabinet system he installed represented a civilian adaptation of his military staff, with Executive sessions of the Cabinet resembling the councils of war that had provided collective wisdom during crises. As Thomas Jefferson later described it, Washington made himself “the hub of the wheel,” with routine business delegated to the department heads at the rim. It was a system that maximized Executive control while also creating essential distance from details. Its successful operation depended upon two skills that Washington had developed over his lengthy career: first, identifying and recruiting talented and ambitious young men, usually possessing formal education superior to his own, then trusting them with considerable responsibility and treating them as surrogate sons in his official family; second, knowing when to remain the hedgehog who keeps his distance and when to become the fox who dives into the details.
On the first score, as a judge of talent, Washington surrounded himself with the most intellectually sophisticated collection of statesmen in American history. His first recruit, James Madison, became his most trusted consultant on judicial and Executive appointments and his unofficial liaison with Congress. The precocious Virginian was then at the peak of his powers, having just completed a remarkable string of triumphs as the dominant force behind the nationalist agenda at the Constitutional Convention and the Virginia ratifying convention, as well as being co-author of The Federalist Papers. From his position in the House of Representatives he drafted the address welcoming Washington to the Presidency, then drafted Washington’s response to it, making him a one-man shadow government. Soon after the inaugural ceremony he showed Washington his draft of 12 amendments to the Constitution, subsequently reduced to 10 and immortalized as the Bill of Rights. Washington approved the historic proposal without changing a word and trusted Madison to usher it through Congress with his customary proficiency.
One of Madison’s early assignments was to persuade his reluctant friend from Monticello to serve as Secretary of State. Thomas Jefferson combined nearly spotless Revolutionary credentials with five years of diplomatic experience in Paris, all buoyed by a lyrical way with words and ideas most famously displayed in his draft of the Declaration of Independence.
Alexander Hamilton was the third member of this talented trinity and probably the brightest of the lot. While Madison and Jefferson had come up through the Virginia school of politics, which put a premium on an understated style that emphasized indirection and stealth, Hamilton had come out of nowhere (actually, impoverished origins in the Caribbean) to display a dashing, out-of-my-way style that imposed itself ostentatiously. As Washington’s aide-de-camp during the war, he had occasionally shown himself to be a headstrong surrogate son, always searching for an independent command beyond Washington’s shadow. But his loyalty to his mentor was unquestioned, and his affinity for the way he thought was unequaled. Moreover, throughout the 1780s Hamilton had been the chief advocate for fiscal reform as the essential prerequisite for an energetic national government, making him the obvious choice as Secretary of the Treasury once Robert Morris had declined.
The inner circle was rounded out by three appointments of slightly lesser luster. Gen. Henry Knox, appointed Secretary of War, had served alongside Washington from Boston to Yorktown and had long since learned to subsume his own personality so thoroughly within his chief’s that disagreements became virtually impossible. More than just a cipher, as some critics of Washington’s policies later claimed, Knox joined Vice President Adams as a seasoned New England voice within the councils of power. John Jay, the new Chief Justice, added New York’s most distinguished legal and political mind to the mix, and also extensive foreign policy experience. As the first Attorney General, Edmund Randolph lacked Jay’s gravitas and Knox’s experience, but his reputation for endless vacillation was offset by solid political connections within the Tidewater elite, reinforced by an impeccable bloodline. Washington’s judgment of the assembled team was unequivocal. “I feel myself supported by able co-adjutors,” he observed in June of 1790, “who harmonize extremely well together.”
In three significant areas of domestic policy, each loaded with explosive political and constitutional implications, Washington chose to delegate nearly complete control to his “co-adjutors.” Although his reasons for maintaining a discreet distance differed in each case, they all reflected his recognition that Executive power still lived under a monarchical cloud of suspicion and could be exercised only selectively. Much like his Fabian role during the war, when he learned to avoid an all-or-nothing battle with the British, choosing when to avoid conflict struck him as the essence of effective Executive leadership.
The first battle he evaded focused on the shape and powers of the federal courts. The Constitution offered even less guidance on the judiciary than it did on the Executive branch. Once again the studied ambiguity reflected apprehension about any projection of federal power that upset the compromise between state and federal sovereignty. Washington personally preferred a unified body of national law, regarding it as a crucial step in creating what the Constitution called “a more perfect union.” In nominating Jay to head the Supreme Court, he argued that the federal judiciary “must be considered as the Key-Stone of our political fabric” since a coherent court system that tied the states and regions together with the ligaments of law would achieve more in the way of national unity than any other possible reform.
But that, of course, was also the reason it proved so controversial. The debate over the Judiciary Act of 1789 exposed the latent hostility toward any consolidated court system. The act created a six-member Supreme Court, three circuit courts, and thirteen district courts but left questions of original or appellate jurisdiction intentionally blurred so as to conciliate the advocates of state sovereignty. Despite his private preferences, Washington deferred to the tradeoffs worked out in congressional committees, chiefly a committee chaired by Oliver Ellsworth of Connecticut, which designed a framework of overlapping authorities that was neither rational nor wholly national in scope. In subsequent decades John Marshall, Washington’s most loyal and influential disciple, would move this ambiguous arrangement toward a more coherent version of national law. But throughout Washington’s Presidency the one thing the Supreme Court could not be, or appear to be, was supreme, a political reality that Washington chose not to contest.
A second occasion for calculated Executive reticence occurred in February of 1790 when the forbidden subject of slavery came before Congress. Two Quaker petitions, one arguing for an immediate end to the slave trade, the other advocating the gradual abolition of slavery itself, provoked a bitter debate in the House. The petitions would almost surely have been consigned to legislative oblivion except for the signature of Benjamin Franklin on the second one, which transformed a beyond-the-pale protest into an unavoidable challenge to debate the moral compatibility of slavery with America’s avowed Revolutionary principles. In what turned out to be his last public act, Franklin was investing his enormous prestige to force the first public discussion of the sectional differences over slavery at the national level. (The debates at the Constitutional Convention had occurred behind closed doors, and their records remained sealed.) If only in retrospect, the discussions in the House during the spring of 1790 represented the Revolutionary generation’s final opportunity to place slavery on the road to ultimate extinction.
Washington shared Franklin’s view of slavery as a moral and political anachronism. On three occasions during the 1780s he let it be known that he favored adopting some kind of gradual emancipation scheme and would give his personal support to such a scheme whenever it materialized. Warner Mifflin, one of the Quaker petitioners who knew of Washington’s previous statements, obtained a private interview in order to plead that the President step forward in the manner of Franklin. As the only American with more prestige than Franklin, Washington could make the decisive difference in removing this one massive stain on the Revolutionary legacy, as well as on his own.
We can never know what might have happened if Washington had taken this advice. He listened politely to Mifflin’s request but refused to commit himself, on the grounds that the matter was properly the province of Congress and “might come before me for official decision.” He struck a more cynical tone in letters to friends back in Virginia: “. . . the introduction of the Quaker Memorial, respecting slavery, was to be sure, not only an ill-judged piece of business, but occasioned a great waste of time.” He endorsed Madison’s deft management of the debate and behind-the-scenes maneuvering in the House, which voted to prohibit any further consideration of ending the slave trade until 1808, as the Constitution specified; more significantly, Madison managed to take slavery off the national agenda by making any legislation seeking to end it a state rather than federal prerogative. Washington expressed his satisfaction that the threatening subject “has at last been put to sleep, and will scarcely awake before the year 1808.”
What strikes us as a poignant failure of moral leadership appeared to Washington as a prudent exercise of political judgment. There is no evidence that he struggled over the decision. Whatever his personal views on slavery may have been, his highest public priority was the creation of a unified American nation. The debates in the House only dramatized the intractable sectional differences he had witnessed from the chair at the Constitutional Convention. They reinforced his conviction that slavery was the one issue with the political potential to destroy the republican experiment in its infancy.
Finally, in the most dramatic delegation of all, Washington gave total responsibility for rescuing the debt-burdened American economy to his charismatic Secretary of the Treasury. Before Hamilton was appointed, in September of 1789, Washington requested financial records from the old confederation government and quickly discovered that he had inherited a messy mass of state, domestic, and foreign debt. The records were bedeviled by floating bond rates, complicated currency conversion tables, and guesswork revenue projections that, taken together, were an accountant’s worst nightmare. After making a heroic effort of his own that merely confirmed his sense of futility, Washington handed the records and fiscal policy of the new nation to his former aide-de-camp, who turned out to be, among other things, a financial genius.
Hamilton buried himself in the numbers for three months, then emerged with a 40,000-word document titled Report on Public Credit. His calculations revealed that the total debt of the United States had reached the daunting (for then) size of $77.1 million, which he divided into three separate ledgers: foreign debt ($11.7 million), federal debt ($40.4 million), and state debt ($25 million). Several generations of historians and economists have analyzed the intricacies of Hamilton’s Report and created a formidable body of scholarship on its technical complexities, but for our purposes it is sufficient to know that Hamilton’s calculations were accurate and his strategy simple: Consolidate the messy columns of foreign and domestic debt into one central pile. He proposed funding the federal debt at par, assuming all the state debts, then creating a national bank to manage all the investments and payments at the federal level.
This made excellent economic sense, as the resultant improved credit rating of the United States in foreign banks and surging productivity in the commercial sector demonstrated. But it also proved to be a political bombshell that shook Congress for more than a year. For Hamilton had managed to create, almost single-handedly, an unambiguously national economic policy that presumed the sovereign power of the federal government. He had pursued a bolder course than the more cautious framers of the Judiciary Act had followed in designing the court system, leaving no doubt that control over fiscal policy would not be brokered to accommodate the states. All three ingredients in his plan—funding, assumption, and the bank—were vigorously contested in Congress, with Madison leading the opposition. The watchword of the critics was consolidation, an ideological cousin to monarchy.
Washington did not respond. Indeed, he played no public role at all in defending Hamilton’s program during the fierce congressional debates. For his part, Hamilton never requested presidential advice or assistance, regarding control over his own bailiwick as his responsibility. A reader of their correspondence might plausibly conclude that the important topics of business were the staffing of lighthouses and the proper design of Coast Guard cutters to enforce customs collections. But no public statements were necessary, in part because Hamilton was a one-man army in defending his program, “a host unto himself,” as Jefferson later called him, and by February of 1791 the last piece of the Hamiltonian scheme, the bank, had been passed by Congress and now only required the presidential signature.
But the bank proved to be the one controversial issue that Washington could not completely delegate to Hamilton. As a symbol it was every bit as threatening, as palpable an embodiment of federal power, as a sovereign Supreme Court. As part of a last-ditch campaign to scuttle the bank, the three Virginians within Washington’s official family mobilized to attack it on constitutional grounds. Jefferson, Madison, and Randolph submitted separate briefs, all arguing that the power to create a corporation was nowhere specified by the Constitution and that the Tenth Amendment clearly stated that powers not granted to the federal government were retained by the states. Before rendering his own verdict, Washington sent the three negative opinions to Hamilton for rebuttal. His response, which exceeded 13,000 words, became a landmark in American legal history, arguing that the “necessary and proper” clause of the Constitution (Article 1, Section 8) granted implied powers to the federal government beyond the explicit powers specified in the document. Though there is some evidence that Washington was wavering before Hamilton delivered his opinion, it was not the brilliance of the opinion that persuaded him. Rather, it provided the legal rationale he needed to do what he had always wanted to do. For the truth was that Washington was just as much an economic nationalist as Hamilton, a fact that Hamilton’s virtuoso leadership throughout the yearlong debate had conveniently obscured.
As both a symbolic political centerpiece and a deft delegator of responsibility, Washington managed to levitate above the political landscape. That was his preferred position, personally because it made his natural aloofness into an asset, politically because it removed the Presidency from the partisan battles on the ground. In three policy areas, however—the location of the national capital, foreign policy, and Indian affairs—he reverted to the kind of meticulous personal management he had pursued at Mount Vernon.
What was called “the residence question” had its origins in a provision of the Constitution mandating Congress to establish a “seat of government” without specifying the location. By the spring of 1790 the debates in Congress had deteriorated into a comic parody on the gridlock theme. Sixteen different sites had been proposed, then rejected, as state and regional voting blocs mobilized against each alternative in order to preserve their own preferences. One frustrated congressman suggested that perhaps they should put the new capital on wheels and roll it from place to place. An equally frustrated newspaper editor observed that “since the usual custom is for the capital of new empires to be selected by the whim or caprice of a despot,” and since Washington “had never given bad advice to his country,” why not “let him point to a map and say ‘here’?”
That is not quite how the Potomac site emerged victorious. Madison had been leading the fight in the House for a Potomac location, earning the nickname “Big Knife” for cutting deals to block the other alternatives. (One of Madison’s most inspired arguments was that the geographic midpoint of the nation on a north-south axis was not just the mouth of the Potomac, but Mount Vernon itself, a revelation of providential proportions.) Eventually a private bargain was struck over dinner at Jefferson’s apartment, subsequently enshrined in lore as the most consequential dinner party in American history, where Hamilton agreed to deliver sufficient votes from several Northern states to clinch the Potomac location in return for Madison’s pledge to permit passage of Hamilton’s assumption bill. Actually, there were multiple behind-the-scenes bargaining sessions going on at the same time, but the notion that an apparently intractable political controversy could be resolved by a friendly conversation over port and cigars has always possessed an irresistible narrative charm. The story also conjured up the attractive picture of brotherly cooperation within his official family that Washington liked to encourage.
Soon after the Residence Act designating a Potomac location passed, in July of 1790, that newspaper editor’s suggestion (give the whole messy question to Washington) became fully operative. Jefferson feared that the Potomac site would be sabotaged if the endless management details for developing a city from scratch were left to Congress. So he proposed a thoroughly imperial solution: Bypass Congress altogether by making all subsequent decisions about architects, managers, and construction schedules an Executive responsibility, “subject to the President’s direction in every point.”
And so they were. What became Washington, D.C., was aptly named, for while the project had many troops involved in its design and construction, it had only one supreme commander. He selected the specific site on the Potomac between Rock Creek and Goose Creek, while pretending to prefer a different location to hold down the purchase price for the lots. He appointed the commissioners, who reported directly to him rather than to Congress. He chose Pierre L’Enfant as chief architect, personally endorsing L’Enfant’s plan for a huge tract encompassing nine and a half square miles and thereby rejecting Jefferson’s preference for a small village that would gradually expand in favor of a massive area that would gradually fill up. When L’Enfant’s grandiose vision led to equivalently grandiose demands—he refused to take orders from the commissioners and responded to one stubborn owner of a key lot by blowing up his house—Washington fired him. He approved the sites for the presidential mansion and the Capitol as well as the architects who designed them. All in all, he treated the nascent national capital as a public version of his Mount Vernon plantation, right down to the supervision of the slave labor force that did much of the work.
It helped that the construction site was located near Mount Vernon, so he could make regular visits to monitor progress on his trips home from the capital in Philadelphia. It also helped that Jefferson and Madison could confer with him at the site on their trips back to Monticello and Montpelier. At a time when both Virginians were leading the opposition to Hamilton’s financial program, their cooperation on this ongoing project served to bridge the widening chasm within the official family over the Hamiltonian vision of federal power. However therapeutic the cooperation, it belied a fundamental disagreement over the political implications of their mutual interests in the Federal City, as it was then called. For Jefferson and Madison regarded the Potomac location of the permanent capital as a guarantee of Virginia’s abiding hegemony within the Union, as a form of geographic assurance, if you will, that the government would always speak with a Southern accent. Washington thought more expansively, envisioning the capital as a focusing device for national energies that would overcome regional jealousies, performing the same unifying function geographically that he performed symbolically. His personal hobbyhorse became a national university within the capital, where the brightest young men from all regions could congregate and share a common experience as Americans that helped to “rub off” their sectional habits and accents.
His hands-on approach toward foreign policy was only slightly less direct than his control of the Potomac project, and the basic principles underlying Washington’s view of the national interest were present from the start. Most elementally, he was a thoroughgoing realist. Though he embraced republican ideals, he believed that the behavior of nations was driven not by ideals but by interests. This put him at odds ideologically and temperamentally with his Secretary of State, since Jefferson was one of the most eloquent spokesmen for the belief that American ideals were American interests. Jefferson’s recent experience in Paris as a witness to the onset of the French Revolution had only confirmed his conviction that a global struggle on behalf of those ideals had just begun and that it had a moral claim on American support. Washington was pleased to receive the key to the Bastille from Lafayette; he also knew as well as or better than anyone else that the victory over Great Britain would have been impossible without French economic and military assistance. But he was determined to prevent his warm memories of Rochambeau’s soldiers and de Grasse’s ships at Yorktown from influencing his judgment about the long-term interests of the United States.
Those interests, he was convinced, did not lie across the Atlantic but across the Alleghenies. The chief task, as Washington saw it, was to consolidate control of the North American continent east of the Mississippi. Although Jefferson had never been west of the Blue Ridge Mountains, he shared Washington’s preference for Western vistas. (During his own Presidency Jefferson would do more than anyone to expand those vistas beyond the Mississippi to the Pacific.)
Tight presidential control over foreign policy was unavoidable at the start because Jefferson did not come on board until March of 1790. Washington immediately delegated all routine business to him but preserved his own private lines of communication on French developments, describing reports of escalating bloodshed he received from Paris “as if they were the events of another planet.” His cautionary posture toward revolutionary France received reinforcement from Gouverneur Morris, a willfully eccentric and thoroughly irreverent American in Paris whom Washington cultivated as a correspondent. Morris described France’s revolutionary leaders as “a Fleet at Anchor in the fog,” and he dismissed as a hopelessly romantic illusion Jefferson’s view that a Gallic version of 1776 was under way. The American Revolution, Morris observed, had been guided by experience and light, while the French were obsessed with experiment and lightning.
Washington’s supervisory style, as well as his realistic foreign-policy convictions, was put on display when a potential crisis surfaced in the summer of 1790. A minor incident involving Great Britain and Spain in Nootka Sound (near modern-day Vancouver) prompted a major appraisal of American national interests. The British appeared poised to use the incident to launch an invasion from Canada down the Mississippi, to displace Spain as the dominant European power in the American West. This threatened to change the entire strategic chemistry on the continent and raised the daunting prospect of another war with Great Britain.
Washington convened his Cabinet in Executive session, thereby making clear for the first time that the Cabinet and not the more cumbersome Senate would be his advisory council on foreign policy. He solicited written opinions from all the major players, including Adams, Hamilton, Jay, Jefferson, and Knox. The crisis fizzled away when the British decided to back off, but during the deliberations two revealing facts became clear: first, that Washington was resolved to avoid war at any cost, convinced that the fragile American Republic was neither militarily nor economically capable of confronting the British leviathan at this time; and second, that Hamilton’s strategic assessment, not Jefferson’s, was more closely aligned with his own, which turned out to be a preview of coming attractions.
Strictly speaking, the federal government’s relations with the Native American tribes were also a foreign-policy matter. From the start, however, with Jefferson arriving late on the scene, Indian affairs came under the authority of the Secretary of War. As ominous as this might appear in retrospect, Knox took responsibility for negotiating the disputed terms of several treaties approved by the Confederation Congress. For both personal and policy reasons Washington wanted his own hand firmly on this particular tiller, and his intimate relationship with Knox assured a seamless coordination guided by his own judgment. He had been present at the start of the struggle for control of the American interior, and he regarded the final fate of the Indian inhabitants as an important piece of unfinished business that must not be allowed to end on a tragic note.
At the policy level, if America’s future lay to the west, as Washington believed, it followed that the region between the Alleghenies and the Mississippi merited Executive attention more than the diplomatic doings in Europe. Knox estimated that about 76,000 Indians lived in the region, about 20,000 of them warriors, which meant that venerable tribal chiefs like Cornplanter and Joseph Brant deserved more cultivation as valuable allies than did heads of state across the Atlantic. At the personal level, Washington had experienced Indian power firsthand. As commander of the Virginia Regiment during the French and Indian War, he saw Native Americans not as exotic savages but as familiar and formidable adversaries fighting for their own independence, behaving pretty much as he would do in their place. Moreover, the letters the new President received from several tribal chiefs provided poignant testimony that they now regarded him as their personal protector. “Brother,” wrote one Cherokee chief, “we give up to our white brothers all the land we could any how spare, and have but little left. . . and we hope you wont let any people take any more from us without our consent. We are neither Birds nor Fish; we can neither fly in the air nor live under water. . . .We are made by the same hand and in the same shape as yourselves.”
Such pleas did not fall on deaf ears. Working closely with Knox, Washington devised a policy designed to create several sovereign Indian “homelands.” He concurred when Knox insisted that “the independent tribes of indians ought to be considered as foreign nations, not as the subjects of any particular State.” Treaties with these tribes ought to be regarded as binding contracts with the federal government, whose jurisdiction could not be compromised: “Indians being the prior occupants possess the right of the Soil. . . . To dispossess them . . . would be a gross violation of the fundamental Laws of Nature and of that distributive Justice which is the glory of a nation.” A more coercive policy of outright confiscation, Washington believed, would constitute a moral failure that “would stain the character of the nation.” He sought to avoid the outcome—Indian removal—that occurred more than 40 years later under Andrew Jackson. Instead, he envisioned multiple sanctuaries under tribal control that would be bypassed by the surging wave of white settlers and whose occupants would gradually, over the course of the next century, become assimilated as full-fledged American citizens.
Attempting to make this vision a reality occupied more of Washington’s time and energy than any other foreign or domestic issue during his first term. Success depended on finding leaders willing to negotiate yet powerful enough to impose a settlement on other tribes. Knox and Washington found a charismatic Creek chief of mixed blood named Alexander McGillivray, a literate man whose diplomatic skills and survival instincts made him the Indian version of France’s Talleyrand, and in the summer of 1790 Washington hosted McGillivray and 26 chiefs for several weeks of official dinners, parades, and diplomatic ceremonies more lavish than any European delegation enjoyed. (McGillivray expected and received a personal bribe of $1,200 a year to offset the bribe the Spanish were already paying him not to negotiate with the Americans.) Washington and the chiefs locked arms in Indian style and invoked the Great Spirit, and then the chiefs made their marks on the Treaty of New York, redrawing the borders for a sovereign Creek Nation. Washington reinforced the terms of the treaty by issuing the Proclamation of 1790, an Executive Order forbidding private or state encroachments on all Indian lands guaranteed by treaty with the United States.
But the President soon found that it was one thing to proclaim and quite another to sustain. The Georgia legislature defied the proclamation by making a thoroughly corrupt bargain to sell more than 15 million acres on its western border to speculators calling themselves the Yazoo Companies, thereby rendering the Treaty of New York a worthless piece of paper. In the northern district above the Ohio, no equivalent to McGillivray could be found, mostly because the Six Nations, which Washington could remember as a potent force in the region, had been virtually destroyed in the War for Independence and could no longer exercise hegemony over the Ohio Valley tribes.
Washington was forced to approve a series of military expeditions into the Ohio Valley to put down uprisings by the Miamis, Wyandots, and Shawnees, even though he believed that the chief culprits were white vigilante groups determined to provoke hostilities. The Indian side of the story, he complained, would never make it into the history books: “They, poor wretches, have no press thro’ which their grievances are related; and it is well known, that when one side only of a Story is heard, and often repeated, the human mind becomes impressed with it, insensibly.” Worse still, the expedition commanded by Arthur St. Clair was virtually annihilated in the fall of 1791—reading St. Clair’s battle orders is like watching Custer prepare for the Little Bighorn—thereby creating white martyrs and provoking congressional cries for reprisals in what had become an escalating cycle of violence that defied Washington’s efforts at conciliation.
Eventually the President was forced to acknowledge that his vision of secure Indian sanctuaries could not be enforced. “I believe scarcely any thing short of a Chinese wall,” he lamented, “will restrain Land jobbers and the encroachment of settlers upon the Indian country.” Knox concurred, estimating that federal control on the frontier would require an arc of forts from Lake Erie to the Gulf of Mexico, garrisoned by no less than 50,000 troops. This was a logistical, economic, and political impossibility. Washington’s vision of peaceful coexistence also required that federal jurisdiction over the states as the ultimate guarantor of all treaties be recognized as supreme, which helps explain why he was so passionate about the issue, but also why it could never happen. If a just accommodation with the Native American populations was the major preoccupation of his first term, it was also the singular failure.
By the spring of 1792, then, what Washington had imagined as a brief caretaker Presidency with mostly ceremonial functions had grown into a judicious but potent projection of Executive power. The Presidency so vaguely defined in the Constitution had congealed into a unique synthesis of symbolism and substance, its occupant the embodiment of that work in progress called the United States and the chief magistrate with supervisory responsibility for all domestic and foreign policy, in effect an elected king and prime minister rolled into one. There was a sense at the time, since confirmed by most historians of the Presidency, that no one else could have managed this political evolution so successfully, indeed that under anyone else the experiment with republican government would probably have failed at the start. Eventually the operation of the federal government under the Constitution would be described as “a machine that ran itself.” At the outset, however, the now venerable checks and balances of the Constitution required a trusted leader who had internalized checks and balances sufficiently to understand both the need for Executive power and the limitations of its effectiveness. He made the Presidency a projection of himself.
Washington tried to step down after those first four years and, perhaps predictably, failed. His second term was increasingly full of rancor, with dramatic developments in Europe and mounting tensions between Jefferson and Hamilton within his Cabinet that together threatened to destroy all he had accomplished. But fierce though these conflicts were, they weren’t powerful enough to destroy the foundation that Washington had built, and they haven’t managed to yet. | 3.044294 |
The life-giving ideas of chemistry are not reducible to physics. Or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. And, most importantly, they lose their chemical utility—their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. I'm thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation-reduction. As well as some theoretical ideas I've been involved in personally—through-bond coupling, orbital symmetry control, the isolobal analogy.
Consider the notion of oxidation state. If you had to choose two words to epitomize the same-and-not-the-same nature of chemistry, would you not pick ferrous and ferric? The concept evolved at the end of the 19th century (not without confusion with "valency"), when the reality of ions in solution was established. As did a multiplicity of notations—ferrous iron is iron in an oxidation state of +2 (or is it 2+?) or Fe(II). Schemes for assigning oxidation states (sometimes called oxidation numbers) adorn every introductory chemistry text. They begin with the indisputable: In compounds, the oxidation states of the most electronegative elements (those that hold on most tightly to their valence electrons), oxygen and fluorine for example, are –2 and –1, respectively. After that the rules grow ornate, desperately struggling to balance wide applicability with simplicity.
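The charge-balance bookkeeping behind those textbook rules can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: only a handful of elements with fixed "indisputable" states are tabled, exactly one element's state is left unknown, and the composition is entered by hand rather than parsed from a formula — real assignment schemes handle peroxides, hydrides, and the other ornate exceptions the essay alludes to.

```python
# Toy sketch of oxidation-state assignment by charge balance.
# Assumption: fixed textbook states for a few elements; exceptions
# (peroxides, metal hydrides, etc.) are deliberately ignored.
KNOWN = {"O": -2, "F": -1, "H": +1, "Na": +1, "K": +1, "Ba": +2}

def unknown_oxidation_state(composition, charge=0):
    """composition maps element symbol -> atom count; exactly one
    element may be absent from KNOWN. Returns its oxidation state."""
    unknown = [el for el in composition if el not in KNOWN]
    if len(unknown) != 1:
        raise ValueError("need exactly one element with an unknown state")
    el = unknown[0]
    known_sum = sum(KNOWN[e] * n for e, n in composition.items() if e != el)
    # The oxidation states, weighted by atom counts, must sum to the
    # overall charge of the species.
    state, remainder = divmod(charge - known_sum, composition[el])
    if remainder != 0:
        raise ValueError("non-integral state; model too crude here")
    return state

print(unknown_oxidation_state({"Ba": 1, "Fe": 1, "O": 4}))    # Fe in BaFeO4 -> 6
print(unknown_oxidation_state({"Ag": 1, "F": 4}, charge=-1))  # Ag in AgF4(-) -> 3
```

The same balance recovers Mn = +7 in permanganate (MnO4–) and, for a neutral UO6, U = +12 — the world-record candidate mentioned later in the essay.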
The oxidation-state scheme had tremendous classificatory power (for inorganic compounds, not organic ones) from the beginning. Think of the sky blue color of chromium(II) versus the violet or green of chromium(III) salts, the four distinctly colored oxidation states of vanadium. Oliver Sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. And not only boys.
But there was more to oxidation states than just describing color. Or balancing equations. Chemistry is transformation. The utility of oxidation states dovetailed with the logic of oxidizing and reducing agents—molecules and ions that with ease removed or added electrons to other molecules. Between electron transfer and proton transfer you have much of reaction chemistry.
I want to tell you how this logic leads to quite incredible compounds, but first let's look for trouble. Not for molecules—only for the human beings thinking about them.
Those Charges are Real, Aren't They?
Iron is not only ferrous or ferric, but also comes in oxidation states ranging from +6 (in BaFeO4) to –2 (in [Fe(CO)4]2–, a good organometallic reagent).
Is there really a charge of +6 on the iron in the first compound and a –2 charge in the carbonylate? Of course not, as Linus Pauling told us in one of his many correct (among some incorrect) intuitions. Such large charge separation in a molecule is unnatural. Those iron ions aren't bare—the metal center is surrounded by more or less tightly bound "ligands" of other simple ions (Cl– for instance) or molecular groupings (CN–, H2O, PH3, CO). The surrounding ligands act as sources or sinks of electrons, partly neutralizing the formal charge of the central metal atom. At the end, the net charge on a metal ion, regardless of its oxidation state, rarely lies outside the limits of +1 to –1.
Actually, my question should have been countered critically by another: How do you define the charge on an atom? A problem indeed. A Socratic dialogue on the concept would bring us to the unreality of dividing up electrons so they are all assigned to atoms and not partly to bonds. A kind of tortured pushing of quantum mechanical, delocalized reality into a classical, localized, electrostatic frame. In the course of that discussion it would become clear that the idea of a charge on an atom is a theoretical one, that it necessitates definition of regions of space and algorithms for divvying up electron density. And that discussion would devolve, no doubt acrimoniously, into a fight over the merits of uniquely defined but arbitrary protocols for assigning that density. People in the trade will recognize that I'm talking about "Mulliken population analysis" or "natural bond analysis" or Richard Bader's beautifully worked out scheme for dividing up space in a molecule.
What about experiment? Is there an observable that might gauge a charge on an atom? I think photoelectron spectroscopies (ESCA or Auger) come the closest. Here one measures the energy necessary to promote an inner-core electron to a higher level or to ionize it. Atoms in different oxidation states do tend to group themselves at certain energies. But the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom.
An oxidation state bears little relation to the actual charge on the atom (except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to +26). This doesn't stop the occasional theoretician today from making a heap of a story when the copper in a formal Cu(III) complex comes out of a calculation bearing a charge of, say, +0.51.
Nor does it stop oxidation states from being just plain useful. Many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. Oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to "bookkeep" electrons in the course of a reaction. Even if that electron, whether added or removed, spends a good part of its time on the ligands.
But enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. Let's look at some beautiful chemistry of extreme oxidation states.
Incredible, But True
Recently, a young Polish postdoctoral associate, Wojciech Grochala, led me to look with him at the chemical and theoretical design of novel high-temperature superconductors. We focused on silver (Ag) fluorides (F) with silver in oxidation states II and III. The reasoning that led us there is described in our forthcoming paper. For now let me tell you about some chemistry that I learned in the process. I can only characterize this chemistry as incredible but true. (Some will say that I should have known about it, since it was hardly hidden, but the fact is I didn't.)
Here is what Ag(II), unique to fluorides, can do. In anhydrous HF solutions it oxidizes Xe to Xe(II), generates C6F6+ salts from perfluorobenzene, takes perfluoropropylene to perfluoropropane, and liberates IrF6 from its stable anion. These reactions may seem abstruse to a nonchemist, but believe me, it's not easy to find a reagent that would accomplish them.
Ag(III) is an even stronger oxidizing agent. It oxidizes MF6– (where M=Pt or Ru) to MF6. Here is what Neil Bartlett at the University of California at Berkeley writes of one reaction: "Samples of AgF3 reacted incandescently with metal surfaces when frictional heat from scratching or grinding of the AgF3 occurred."
Ag(II), Ag(III) and F are all about equally hungry for electrons. Throw them one, and it's not at all a sure thing that the electron will wind up on the fluorine to produce fluoride (F–). It may go to the silver instead, in which case you may get some F2 from the recombination of F atoms.
Not that everyone can (or wants to) do chemistry in anhydrous HF, with F2 as a reagent or being produced as well. In a recent microreview, Thomas O'Donnell says (with some understatement), "... this solvent may seem to be an unlikely choice for a model solvent system, given its reactivity towards the usual materials of construction of scientific equipment." (And its reactivity with the "materials of construction" of human beings working with that equipment!) But, O'Donnell goes on to say, "... with the availability of spectroscopic and electrochemical equipment constructed from fluorocarbons such as Teflon and Kel-F, synthetic sapphire and platinum, manipulation of and physicochemical investigation of HF solutions in closed systems is now reasonably straightforward."
For this we must thank the pioneers in the field—generations of fluorine chemists, but especially Bartlett and Boris Zemva of the University of Ljubljana. Bartlett reports the oxidation of AgF2 to AgF4– (as KAgF4) using photochemical irradiation of F2 in anhydrous HF (made less acidic by adding KF to the HF). And Zemva used Kr(II) (in KrF2) to react with AgF2 in anhydrous HF in the presence of XeF6 to make XeF5+AgF4–. What a startling list of reagents!
To appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. You can find some of Neil Bartlett's commentary in the article that Wojciech and I wrote, and in an interview with him.
Charge It, Please
Chemists are always changing things. How to tune the propensity of a given oxidation state to oxidize or reduce? One way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. The syntheses of the silver fluorides cited above contain some splendid examples of this strategy. Let me use Bartlett's words again, just explaining that "electronegativity" gauges in some rough way the tendency of an atom to hold on to electrons. (High electronegativity means the electron is strongly held, low electronegativity that it is weakly held.)
It's easy to make a high oxidation state in an anion because an anion is electron-rich. The electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. That, in turn, is lower than it is in a cation. If I take silver and I expose it to fluorine in the presence of fluoride ion, in HF, and expose it to light to break up F2 into atoms, I convert the silver to silver(III), AgF4–. This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than AgF4– because the electronegativity in the neutral AgF3 is much higher than it is in the anion. If I can now take away a fluoride ion, and make a cation, I drive the electronegativity even further up. With such a cation, for example, AgF2+, I can steal the electron from PtF6– and make PtF6.... This is an oxidation that even Kr(II) is unable to bring about.
Simple, but powerful reasoning. And it works.
A World Record?
Finally, a recent oxidation-state curiosity: What is the highest oxidation state one could get in a neutral molecule? Pekka Pyykkö and coworkers suggest cautiously, but I think believably, that octahedral UO6, that is U(XII), may exist. There is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in UO6.
What wonderful chemistry has come—and still promises to come—from the imperfect logic of oxidation states!
© Roald Hoffmann
I am grateful to Wojciech Grochala, Robert Fay and Debra Rolison for corrections and comments. Thanks to Stan Marcus for suggesting the title of this column. | 3.050298 |
Found throughout tropical regions of Africa, the Emperor Scorpion is one of the largest in the scorpion family. A predatory carnivore, its diet ranges from insects to small mammals. In captivity we feed them live crickets, meal worms or morio worms about once a week.
- Live up to 8 years.
- Have an exoskeleton, which they molt once a year to enable growth.
- Adults can grow to 15 centimetres across.
- Habitat: rain forest floor.
- Nocturnal arachnids.
Although it may look very dangerous, the larger scorpions generally tend to be less venomous than the smaller ones. A sting would still be painful, but it would not be life threatening.
For many, 1066 is the date when the Middle Ages began. Centuries of castles, cathedrals and churches followed, busy with chivalry, the Crusades and crop-rotation, all ending some time around 1500.
This, of course, is an over-simplification, just as the term Middle Ages itself is. For a long time, the civilisations of the Romans and the Renaissance were admired; everything in between – the ages in the middle – was regarded as inferior, a period of decline, disease and instability. Only with the Victorians was there some attempt to reconsider these centuries. They, like us, were transfixed by the imaginative leaps of medieval buildings and their intense spirituality.
Certain themes dominate medieval architecture. First, the church was central to everyday life. Usually the most impressive building in the neighbourhood was the parish church, and the finest buildings created were the great stone cathedrals. Secondly, society was strictly ordered. For most of the Middle Ages, the hierarchy of the Feudal System dominated: the majority were poor peasants living in simple dwellings that have long disappeared. A few, the lords and clergy, were rich. Their castles, manor houses, monasteries and colleges by comparison were splendid constructions, and have survived in some form. Thirdly, although technology was limited, building methods and styles did evolve. Throughout the Gothic style dominated, but in a myriad of forms. | 3.824316 |
Duke’s Charter, 1664
By this charter, King Charles II of England granted land that includes present-day New York, New Jersey, most of Maine, and parts of Connecticut and Pennsylvania to his brother James, Duke of York (later James II, King of England). The charter, or royal patent, was awarded on March 12, 1664, and sets up a proprietary colony. It gives James authority to send an armed force to compel the Dutch surrender of the New Netherland province to the English, and allows him to delegate the administration of matters of law, trade, rebellion, and defense in the colony.
New York State Archives [Series B1371, Charter of the proprietary colony from Charles II to the Duke of York, 1664] | 3.486014 |
Diagnosing Mesothelioma: MRIs
Magnetic resonance imaging (MRI) is one of several imaging techniques that doctors use to detect, stage and evaluate the progression of mesothelioma.
These non-invasive scans use magnets and radio waves to help doctors visualize a patient’s organs, tissues, bones and tumors. Many radiologists consider MRIs ideal for viewing the anatomical structures of the chest and abdomen – including the pleura and peritoneum, where mesothelioma tumors most commonly develop.
The first commercial MRI units emerged in the 1980s. Since then, the technology has advanced considerably. Modern MRI units use superconductor magnets and coils to produce a constant magnetic field, as well as radiofrequency energy to measure signals from the nuclei of hydrogen atoms inside the body. Computers inside the scanner register these signals and turn them into images. Most modern units also include a shield, which prevents the scanner from picking up interference from outside signals sources like televisions and radio stations.
MRIs work by aligning the hydrogen nuclei of the water molecules in your body. Radio waves then cause these aligned nuclei to emit signals, which register on the scanner. The images reflect the amount of activity that is occurring in each internal structure. Each MRI-generated image shows a thin slice of the body.
The MRI Process
The entire MRI process takes about an hour or two to complete. The scan itself takes between 30 and 60 minutes, but the appointment also includes pre-scan positioning and other preparation activities.
Once patients arrive at the MRI center and fill out their paperwork, they must remove all metal objects from their bodies. The strong magnets in an MRI scanner can attract zippered clothing, jewelry, watches, belts, keys and credit cards. Implanted medical devices that contain metal may also cause complications with the scan. Next, patients will put on a hospital gown and ear plugs.
Fast Fact: Some MRI scans use a contrast dye to improve image detail. The most common dye is gadolinium, which is safe and effective for most patients with properly functioning kidneys. This magnetic metal ion can visually enhance lesions on MRI images, indicating growths that may be mesothelioma tumors.
For contrast-enhanced MRIs, patients will receive an injection of a contrast dye. This makes certain areas show up more clearly on the test. After the injection, patients lie down on the imaging table.
The technologist then arranges a coil around the part of the body that is being imaged. For pleural mesothelioma, this will be the chest. For peritoneal mesothelioma, it will be the abdomen. Once the technician positions the patient’s body correctly for the exam, the patient and table are slid into a tube-like opening in the MRI machine.
During the scan, the machine makes repetitive knocking sounds as the magnetic field gradients turn on and off. The test itself is painless. Patients should try not to move during the scan, but they can communicate with their technician via microphone if they feel scared or claustrophobic. Patients can leave the MRI center immediately after the scan.
A post-processing technologist will then highlight abnormal areas on the images. Once the final images are ready for review, a radiologist interprets the results and provides the patient’s primary doctor with a report. From there, the physician can examine the scan on a computer monitor, send them electronically to the rest of the treatment team or print them out for the patient’s medical records.
MRI Side Effects
Patients occasionally experience minor side effects after an MRI scan.
- Magnetophosphenes (brief flashes of light across the retina)
- Vertigo or dizziness
- Metallic taste in the mouth
- Nausea
- Physical burns or burning sensations (extremely rare)
MRIs do not place patients at risk for radiation-induced damage. Because MRIs do not use ionizing radiation, most doctors prefer MRIs for patients who need routine imaging scans. The U.S. Food and Drug Administration (FDA) concludes that as long as the field strengths are kept below 2.0 Tesla, MRIs are safe for repeated use.
MRIs for Diagnosing Mesothelioma Tumors
Magnetic resonance imaging currently plays a limited role in diagnosing mesothelioma. When doctors do prescribe MRI scans to diagnose the disease, they often use them to complement CT scan results. MRI-generated images can help differentiate between normal tissue and tumor tissue, which cannot be determined with a CT scan alone.
MRI scans produce a visual representation of differences in signal intensity between cancerous and noncancerous tissues. Because cancerous tissues emit more intense signals than surrounding healthy tissue, malignant mesothelioma tissues appear as white spots of varying brightness on the scan results. The difference between malignant and noncancerous tissue is even more pronounced in contrast-enhanced MRIs.
To arrive at a mesothelioma diagnosis, radiologists usually inspect MRI-generated images for a mass on the pleura, which encases the lungs. These masses often emit signals of intermediate intensity. The fluid located between the lungs and pleura can also indicate mesothelioma, as areas of pleural fluid with very intense signals sometimes surround pleural masses. MRI scans are generally superior to CT scans for characterizing pleural fluid as benign or malignant.
Other MRI features that can suggest mesothelioma include:

- Chest wall infiltration
- Mediastinal pleural involvement
- Circumferential pleural thickening
- Nodularity
- Other irregular changes in pleural tissue
Features such as bilateral pleural involvement, pleural shrinkage, pleural effusions and pleural calcifications may also show up on MRI-generated images. These features may suggest mesothelioma, but cannot be used to make a definitive diagnosis.
MRIs for Staging Mesothelioma Tumors
Most studies indicate that MRIs and CT scans are equally effective for accurately staging malignant mesothelioma tumors. While MRIs are less effective at detecting lymph node involvement, they are generally superior at detecting the extent of a tumor’s invasion of other local structures – one of the key steps in staging a mesothelioma tumor.
When radiologists use MRIs to stage a mesothelioma tumor, they look for the following features:
- Loss of normal fat planes
- Extension into mediastinal fat
- Tumor growth that encases more than half the circumference of an organ or mediastinal structure
Radiologists can exclude patients as good surgical candidates if an MRI scan shows mediastinal or full-thickness pericardial involvement, diffuse or multifocal chest wall disease or involvement of the diaphragm or spine.
By revealing the stage of a mesothelioma tumor, MRI images can help doctors determine whether or not the patient is a good candidate for invasive surgery. MRIs are especially useful for detecting two primary features of patients who are unlikely to benefit from an aggressive operation: chest wall invasion and involvement of the diaphragm.
In one study, MRIs detected diaphragmatic spread with 82 percent accuracy, while CT scans detected the same condition with only 55 percent accuracy.
MRIs are useful for staging mesothelioma with the TNM system. Some studies suggest that MRI scans can differentiate between T3 and T4 disease, but not earlier stages like T1 and T2. One study found that MRIs understaged half of the mesothelioma tumors by failing to detect pericardial invasion, which advances tumors from stage T2 to stage T3. However, the same MRIs were effective at detecting involvement of the internal pericardium, which also advances tumors from stage T3 to T4. The study correctly identified all of the tumors that were stage T3 or lower (while excluding the T4 tumors) with a positive predictive value of 100 percent.
MRIs for Evaluating Response to Treatment
Oncologists consider the MRI an accurate and reproducible technique for evaluating patient response to mesothelioma treatment. When evaluating the MRI scan results of mesothelioma patients undergoing treatment, radiologists often measure the tumor from several separate sites. This helps account for the rind-like growth pattern of the cancer. The primary measurement that the doctors look for is an increase or decrease in pleural thickness.
Fast Fact: In one study of 50 mesothelioma patients, MRI scans correctly categorized the tumor response in 92 percent of patients.
If there is no visible disease on the post-treatment imaging scan, doctors call this complete response. If there is a 30 percent decrease in the sum of linear tumor measurements, they generally refer to that as a partial response to treatment. If the MRI indicates a size increase of at least 20 percent (or shows a newly developed lesion), the disease is considered progressive.
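These thresholds amount to a simple classification rule. The sketch below is illustrative only, not clinical software; the function name and inputs are hypothetical, and the "stable disease" label for measurements that cross neither threshold is an assumption not spelled out in the article.

```python
def classify_response(baseline_sum_mm, followup_sum_mm, new_lesion=False):
    """Classify treatment response from summed linear tumor measurements.

    Illustrative sketch of the thresholds described above:
    no visible disease = complete response, a 30% decrease = partial
    response, a 20% increase or a new lesion = progressive disease.
    """
    if followup_sum_mm == 0 and not new_lesion:
        return "complete response"
    # Fractional change relative to the pre-treatment baseline
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if new_lesion or change >= 0.20:
        return "progressive disease"
    if change <= -0.30:
        return "partial response"
    return "stable disease"  # assumed label for the in-between case

print(classify_response(100, 65))  # 35% decrease -> partial response
```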
Doctors may also prescribe lung spirometry tests alongside MRIs as another way to evaluate treatment response. Patients whose MRIs indicate a partial or complete response to mesothelioma treatment often display simultaneous improvements in lung function, which can be measured with a spirometer.
When doctors study MRI results to determine treatment response, they can adjust their patient’s prognosis accordingly. In one study, patients whose MRIs indicated a response to therapy had a median survival of 15.1 months, while patients whose MRIs indicated no response had a median survival of only 8.9 months. | 3.353915 |
Are omega-3 polyunsaturated fatty acids derived from food sources other than fish as effective as the ones that are derived from fish? In a recent review in the Journal of Lipid Research, researchers from Oregon State University set out to assess the scientific data we have available to answer that question.
The review article by Donald B. Jump, Christopher M. Depner and Sasmita Tripathy was part of a thematic series geared toward identifying new lipid and lipoprotein targets for the treatment of cardiometabolic diseases.
Interest in the health benefits of omega-3 PUFA stemmed from epidemiological studies on Greenland Inuits in the 1970s that linked reduced rates of myocardial infarction (compared with rates among Western populations) to a high dietary intake of fish-derived omega-3 PUFA. Those studies have spurred hundreds of others attempting to unravel the effects of omega-3 PUFA on cardiovascular disease and its risk factors.
[Figure: The omega-3 polyunsaturated fatty acid (PUFA) conversion pathway.]
Omega-3 in the diet
Fish-derived sources of omega-3 PUFA are eicosapentaenoic acid (EPA), docosapentaenoic acid (DPA) and docosahexaenoic acid (DHA). These fatty acids can be found in nutritional supplements and foods such as salmon, anchovies and sardines.
Plant-derived sources of omega-3 PUFA are alpha-linolenic acid (ALA) and stearidonic acid (SDA). Alpha-linolenic acid is an essential fatty acid. It cannot be synthesized in the body, so it is necessary to get it from dietary sources, such as flaxseed, walnuts, canola oil and chia seeds. The overall levels of fatty acids in the heart and blood are dependent on the metabolism of alpha-linolenic acid in addition to other dietary sources.
The heart of the matter
A study in 2007 established that dietary supplementation of alpha-linolenic acid had no effect on myocardial levels of eicosapentaenoic acid or docosahexaenoic acid, and it did not significantly increase their content in cardiac muscle (3). Furthermore, alpha-linolenic acid intake had no protective association with the incidence of coronary heart disease, heart failure, atrial fibrillation or sudden cardiac death (4, 5, 6). In general, it did not significantly affect the omega-3 index, an indicator of cardioprotection (3).
Why doesn’t supplementation of ALA affect the levels of fatty acids downstream in the biochemical pathway (see figure)? The data seem to point to the poor conversion of the precursor ALA to DHA, the end product of the omega-3 PUFA pathway.
DHA is assimilated into cellular membrane phospholipids and is also converted to bioactive fatty acids that affect several signaling mechanisms that control cardiac and vascular function.
According to Jump, “One of the issues with ALA is that it doesn’t get processed very well to DHA.” This is a metabolic problem that involves the initial desaturation step in the pathway, which is catalyzed by the fatty acid desaturase FADS2.
Investigators have explored ways to overcome the metabolic bottleneck created by this rate-limiting step.
One approach involves increasing stearidonic acid in the diet, Jump says, because FADS2 converts ALA to SDA. While studies have shown that increasing SDA results in significantly higher levels of downstream EPA and DPA in blood phospholipids, blood levels of DHA were not increased (7).
FADS2 also is required for DHA synthesis at the other end of the pathway, where it helps produce a DHA precursor.
Consumption of EPA and DHA from fish-derived oil has been reported to increase atrial and ventricular EPA and DHA in membrane phospholipids (3), and heart disease patients who consumed EPA and DHA supplements had a reduction in coronary artery disease and sudden cardiac death (8).
“Based on the prospective cohort studies and the clinical studies,” Jump says, “ALA is not viewed as that cardioprotective.”
He continues, “It is generally viewed that EPA and DHA confer cardioprotection. Consumption of EPA and DHA are recommended for the prevention of cardiovascular diseases. The question then comes up from a metabolic perspective: Can these other sources of omega-3 PUFA, like ALA, be converted to DHA? Yes, they can, but they’re not as effective as taking an EPA- or DHA-containing supplement or eating fish containing EPA and DHA.” (Nonfish sources of EPA from yeast and DHA from algae are commercially available.)
It’s important to note that omega-3 PUFAs are involved in a variety of biological processes, including cognitive function, visual acuity and cancer prevention. The molecular and biochemical bases for their effects on those systems are complex and not well understood.
“These are very busy molecules; they do a lot,” Jump says. “They regulate many different pathways, and that is a problem in trying to sort out the diverse actions these fatty acids have on cells. Even the area of heart function is not fully resolved. While there is a reasonable understanding of the impact of these fatty acids on inflammation, how omega-3 fatty acids control cardiomyocyte contraction and energy metabolism is not well understood. As such, more research is needed.”
Elucidating the role of omega-3s in the heart: the next step
At the University of Maryland, Baltimore, a team led by William Stanley has made strides toward elucidating the role of PUFAs in heart failure.
Stanley’s research group focuses on the role of substrate metabolism and diet in the pathophysiology of heart failure and recently identified the mitochondrial permeability transition pore as a target for omega-3 PUFA regulation (9). The group is very interested in using omega-3 PUFAs to treat heart failure patients who typically have a high inflammatory state and mitochondrial dysfunction in the heart.
“It seems to be that DHA is really the one that is effective at generating resistance to stress-induced mitochondrial pore opening,” which is implicated in ischemic injury and heart failure (10), Stanley says. “It also seems to be that you’ve got to get the DHA in the membranes. You have to ingest it. That’s the bottom line.”
Stanley points out that ingesting DHA in a capsule form makes major diet changes unnecessary: “You can just take three or four capsules a day, and it can have major effects on the composition of cardiac membranes and may improve pump function and ultimately quality of life in these people. The idea would be that they would live longer or just live better.”
The impact and implications of omega-3 in the food industry
The big interest in DHA over the past 30 years has come from the field of pediatrics. Algae-derived DHA often is incorporated into baby formula for breastfeeding mothers who do not eat fish or for those who do not breastfeed at all. "In clinical studies, you see that the visual acuity and mental alertness of the babies are better when they're fed DHA-enriched formula over the standard formula," says Stanley.
Stanley continues: “The current evidence in terms of vegetable-derived omega-3s may be of particular value in developing countries where supplements for DHA (fish oil capsules) or access to high-quality fish may not be readily accessible.”
Food manufacturers in developing countries are beginning to shift to plant-derived omega-3 PUFAs, which are relatively cheap and widely available. Despite those moves, the effects may be limited by the inefficient biochemical processing of the fatty acid — an issue that researchers have yet to resolve.
- 1. Dyerberg, J. et al. Am. J. Clin. Nutr. 28, 958 – 966 (1975).
- 2. Dyerberg, J. et al. Lancet. 2, 117 – 119 (1978).
- 3. Metcalf, R. G. et al. Am. J. Clin. Nutr. 85, 1222 – 1228 (2007).
- 4. de Goede, J. et al. PLoS ONE. 6, e17967 (2011).
- 5. Zhao, G., et al. J. Nutr. 134, 2991 – 2997 (2004).
- 6. Dewell, A. et al. J. Nutr. 141, 2166 – 2171 (2011).
- 7. James, M. et al. J. Clin. Nutr. 77, 1140 – 1145 (2003).
- 8. Dewell, A. et al. J. Nutr. 141, 2166 – 2171 (2011).
- 9. GISSI-Prevenzione Investigators. Lancet. 354, 447 – 455 (1999).
- 10. Khairallah, R. J. et al. Biochim. Biophys. Acta. 1797, 1555 – 1562 (2010).
- 11. O’Shea, K. M. et al. J. Mol. Cell. Cardiol. 47, 819 – 827 (2010).
Shannadora Hollis ([email protected]) received her B.S. in chemical engineering from North Carolina State University and is a Ph.D. student in the molecular medicine program at the University of Maryland, Baltimore. Her research focuses on the molecular mechanisms that control salt balance and blood pressure in health and disease. She is a native of Washington, D.C., and in her spare time enjoys cooking, thrift-store shopping and painting. | 3.012322 |
Courtroom atmospheres, deposition testimony, and cross-examinations have long-standing oral traditions and culture. How does an individual who does not speak participate in such traditions?
Individuals who have severe communication impairments of speech and/or writing may accomplish their communication potential through the use of augmentative and alternative communication (AAC). Communication through AAC techniques, symbols, and strategies, however, is not familiar to judges, attorneys, and court recorders within most courtrooms.
How do speech-language pathologists adequately prepare persons with complex communication needs (PWCCN) to participate within a cultural environment that is entrenched and centered on the spoken word? What graphic symbols best represent legal concepts such as "oath," "testimony," "swearing in," and "legal capacity"? How do PWCCN achieve their right to access justice when their "voice" is communicated through a communication assistant and/or through assistive technology? How may SLPs facilitate modifications within the justice system that allow for an appropriate amount of time for persons with severe physical challenges to respond to a rapid series of questions from attorneys or police? At present, access to justice for persons with severe expressive disorders is difficult.
The Legal Arena
Suppose that an SLP is invited to serve as an expert witness in a case involving a PWCCN. The SLP will work with police, lawyers, and judges in connection with a client. It will be necessary to establish an assessment tool that describes the capacity of the client to testify in court. As an expert witness, the SLP will be challenged immediately by opposing counsel regarding the SLP's competence as an expert as well as his or her choice of assessment tool(s).
SLPs also need to understand the key differences between the clinical and legal arenas. The justice system is centered on "winning" and "losing." Insurance companies participate in determining when to settle and "walk away" and end the case. Another difference is the process of evaluation of the client's communication skills. For example, sometimes a proposal for an evaluation must first be submitted to the court and both attorneys for approval before any contact with an individual is permitted. Thus, the SLP may prepare by reading hundreds of pages of clinical and educational reports regarding an individual with an expressive communication disability, and may then need to seek approval for each proposed diagnostic strategy before the actual evaluation. Modifications to the proposed plan may be suggested by either attorney or the judge.
Experts in litigation today must be familiar with the origin and significance of the Daubert case (Bernstein & Hartsell, 2005). This 1993 landmark decision (Daubert v. Merrell Dow Pharms., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469) resulted in specific instructions for expert testimony introduced into the courtroom. Basically, the Daubert rule established requirements for admissibility of expert testimony, including whether or not the employed technique has been peer-reviewed and published, has a known error rate, can be tested, and is a generally accepted practice within the field.
As expert witnesses, SLPs need to prepare for testimony with the understanding that their scientific knowledge will be tested by the opposing attorney, challenged regarding peer reviews and publications, and examined for potential errors and general acceptance by their own scholarly community. Every word and comma in their expert reports will be scrutinized. Although SLPs may feel confident in their professional knowledge base and clinical skills in AAC, writing and defending the expert report within the legal system is very different from preparing a clinical report for a public school or medical facility. To prepare a report for testimony, SLPs need to translate their clinical knowledge into a legally useful form without using jargon, and to follow the rules, roles, and procedures for written reports according to legal tradition. These evaluations and reports must be precise so as not to introduce any reasonable doubt. Failure to understand the purpose and use of a written report may result in a damaging cross-examination and may undermine the SLP's credibility.
One example of potential difficulty is establishing a legal capacity for expressive communication when that expression is an alternative form to speech. As yet, there is no legal definition of "capacity" for testimony if not through speech. The definition of "capacity" is important—a client must be judged to have the "capacity" to participate, because a legal case may set a precedent. When assistive technologies, such as speech-generating devices (SGDs) or voice output communication aids (VOCAs), are introduced, the question arises: Does the legal capacity (or definition of expressive communication competence) shift when an SGD is used? In other words, if an individual communicates through technology, is the individual legally more capable as a witness than if he or she communicates without an SGD? Might SLPs need to perform two evaluations for the court? One evaluation might be conducted to determine "communication capacity" without technology and another evaluation might determine "communication capacity" with technology or AAC strategy.
Courtrooms may not be accustomed to working with people who use AAC systems. During depositions and testimony, court recorders transcribe speech, but now they must transcribe the language of graphic symbols as reported through communication assistants or through synthetic or digitized speech available within the various technologies. Legal counsel typically examines and cross-examines clients on the witness stand in the courtroom. However, the witness stand may not accommodate a person with a disability seated in a power wheelchair and his or her communication partner; SLPs may need to suggest modifications to courtroom seating arrangements. Judges may not accept testimony by a communication assistant in lieu of actual testimony by the client. Training programs for judges and attorneys may be necessary for greater acceptance of communication through AAC systems and other strategies.
Attorneys often challenge the origins of the communication messages; i.e., the "independence" of each communication message may be examined and cross-examined if programmed by the SLP. The "author" of each communication expression emerging from a synthesized or digitized SGD may be scrutinized. SLPs may be accused of speaking for individuals whom they are assisting. Such challenges can be addressed if the SLP orients attorneys and judges prior to the trial to the person's disabilities, use of AAC, types of vocabulary, and characteristics of appropriate questioning techniques for PWCCN. SLPs will need to understand that individuals are eligible for accommodations, and that they may be responsible for requesting accommodations on behalf of the individual and his or her assistants.
Scope of Practice Issues
Responsibilities for SLPs are expanding as public agencies are processing an increasing number of complaints on behalf of consumers. Cases of abuse, fraud, malpractice, and denial of basic services to PWCCN impact speech-language pathology practices because communication is often at the core of each case. In an administrative or court proceeding, SLPs may become involved in legal practices and procedures that extend beyond their education and training. SLPs need to acquire the knowledge and skills to assist individuals who use AAC in pursuing their basic human right to access justice (Huer et al., 2006). An SLP preparing to testify in these types of court cases should acquire knowledge and skills such as:
- Becoming familiar with the legal process, including understanding the steps and procedures for pre-trial processes, discovery, and investigation
- Learning the basic rules of law, including definitions such as legal "capacity" to testify, and consistency and reliability of testimony by PWCCN
- Identifying the various challenges to testimony and to evaluation
- Advocating for accommodations for PWCCN, when appropriate, throughout the legal process
SLPs who enter the legal arena must coordinate their activities with the attorney with whom they are working. "Full disclosure by the attorney of the nature and characteristics of the proceedings, a thorough review of the SLP's testimony, and extensive rehearsal are the key elements of a successful relationship and the necessary ingredients to maximize the potential for a positive outcome for the client," according to Lew Golinker, an attorney with the Assistive Technology Law Center in Ithaca, NY.
SLPs need to know the procedures involved in the filing of charges and questioning of clients. When a client who does not use speech for expressive communication is questioned, new challenges emerge. Procedural rules create the need for new or different types of practices or procedures in AAC. The conversations between the SLP and the client, the programming of AAC device, and the rules for conversations during court proceedings must be understood in advance; if not, the case may be thrown out or introduce "reasonable doubt," possibly affecting the outcome of the case.
In addition, procedural rules for legal proceedings demand that testimony during depositions and during a trial be the same. The person with a disability as well as SLPs need to understand the necessity for consistency and reliability of response every time the same question is asked and answered. Further, when communicating through alternative forms for expressive communication, it may be difficult to convey to a PWCCN—especially one with intellectual disability—the meaning of "testifying under oath." If the communication is through graphic symbols, what does the symbol for "oath" look like? (See page 7 for a photo that has been used to communicate this concept.)
The person with a disability should understand what to expect and what is expected prior to testifying in court. The SLP should realize, and explain to the client, that testimony during a police investigation is different from testifying in court, especially during cross-examination. Often, contact with police in filing a complaint is brief, and courtroom procedures often occur long after the initial complaint. This time lapse may prove challenging for a witness who has difficulty with long-term memory, and the SLP may need to find ways to remind the client about past events without leading the person to the "correct" answer.
Litigation consultation is a relatively new arena for SLPs. Legal advocacy for PWCCN is a complex process that is only beginning to be identified and understood by professionals in the field of AAC. While involvement in legal issues is an exciting extension of practice, SLPs should pursue additional education before entering into the legal arena (see sidebar for resources). During courtroom cross-examination, the written reports and professional credibility of the SLP are as much in question as the capacities of the person with a disability. With appropriate knowledge and skills, advocating for justice for people who use AAC is an important responsibility for SLPs. | 3.040353 |
Smoke signals, drum telegraphs, and the marathon runner are all examples of man’s effort to conquer the tyranny of distance. However, the first truly successful solution to the problem of rapidly transmitting language across space was the Frenchman Claude Chappe’s optical telegraph.
Chappe's chain of stone towers, topped by 10-ft. poles and 14-ft. pivoting cross members, and spaced as far apart as the eye could see, was first demonstrated to the public in March of 1791 on the Champs Elysees.
Chappe created a language of 9,999 words, each represented by a different position of the swinging arms. When operated by well-trained optical telegraphers, the system was extraordinarily quick. Messages could be transmitted up to 150 miles in two minutes.
Eventually the French military saw the value of Chappe’s invention, and lines of his towers were built out from Paris to Dunkirk and Strasbourg. Within a decade, a network of optical telegraph lines crisscrossed the nation. When Napoleon seized power in 1799, he used the optical telegraph to dispatch the message, “Paris is quiet and the good citizens are content.”
Renovated in 1998, the optical telegraph next to the Rohan Castle in Saverne functioned as part of the Strasbourg line from 1798 until 1852. It is one of several remaining relay points in the system that can still be visited today. | 3.479391 |
Saint Bernard of Clairvaux, Abbot (ca. 1090-1153)
St. Bernard established the Cistercian order as a model for monastic reform throughout Europe and wrote influential commentaries on the Song of Songs and other topics. He is portrayed in the white habit of the order, often accompanied by a devil on a chain, which might refer to the temptations he overcame or possibly the exorcisms attributed to him in the Golden Legend. Another attribute is a white dog, referring to a dream his mother had when she was carrying him.
Feast day: August 20
At left, a Lippi painting of St. Bernard
A 1486 Filippino Lippi painting
A 1710 painting | 3.284753 |
Let’s Talk Art
"The way a child thinks about her art is more important than the way you think about it," says Herbert. "Never impose limitations and never say, 'I'm not good at this.' It introduces fear. Never evaluate a preschooler's music, art, or dance. Make observations from fact. Say, 'there is a red circle,' or 'see these three red lines.' Evaluating may inhibit creativity or discourage a child."
The concept of children understanding art in their own way is not new. Charlotte Mason, a liberal-thinking educator in the late 1800s, wrote in her book Home Education, "We cannot measure the influence that one or another artist has upon the children's sense of beauty, upon his power of seeing, as in a picture, the common sights of life; he is enriched more than we know in having really looked at a single picture."
Parents cannot travel inside their child's brain and ensure that all the educational efforts they make are learned, stored, and applied appropriately. They can be certain, though, that introducing art and music, which have struck emotional chords in humans worldwide for centuries, will enrich an education. The developing mind of a child will soak up whatever it is surrounded with, so why not provide the best history and culture we have to offer? | 3.530367 |
In the summer of 1892, Porter Nye and his family set up a homestead on the south shore of Lake Bemidji.
The area was the last territory in Minnesota to be opened for settlement, and the logging boom was just beginning.
According to local lore, Nye used some of the first boards produced by a mill on the Mississippi River between Lake Bemidji and Lake Irving to build a small schoolhouse on his homestead. Nye also was the first teacher at the school.
In 1902, J. Custer Moore teamed up with Nye to plat the 16-block town site of Nye-Moore, which evolved to Nyemore and, now, Nymore. The next year, residents of the village of Nymore passed a bond issue to build a wood frame school at the corner of Fifth Street and Lincoln Avenue Southeast. The community named the school after President Abraham Lincoln.
Also operating in Nymore was the four-grade East School on the current site of Lincoln Elementary School at 1617 Fifth St. N.E.
Land speculation took off after 1910 and the original Lincoln School became overcrowded. School taxes also were inadequate to maintain the building. Lincoln School was condemned by the State Department of Education, and on March 5, 1916, the Nymore Village Council petitioned the Bemidji City Council for annexation and school consolidation.
It was noted in the Bemidji Daily Pioneer that women voted in the Nymore annexation and school consolidation referendum held later that month. An April 19, 1916, article in the Pioneer stated: "With the annexation of Nymore, a new school will be necessary. A new building will cost about $50,000."
Students started the fall 1917 semester in the new brick school at Fifth Street and Lincoln Avenue. The building is now home to Mount Zion Church.
With the consolidation of the school districts, Bemidji also supplied Nymore with a 14-passenger bus to transport students.
In 1995, Bemidji School District voters approved construction of the new Lincoln Elementary School. Site work began in 1997, and students moved in in October 1999.
On July 3, 1999, the school district held a farewell open house at the 1917 building. About 1,000 current and former students, faculty and staff participated. They toured classrooms and viewed artifacts from the school's collection.
A special artifact, the portrait of President Lincoln originally hung in 1924 in the hallway at the 414 Lincoln Ave. S.E. school, moved with the students into the current Lincoln Elementary School. He now looks down on continuing generations of Lincoln Lakers in the school lobby.
Looking to the future, Lincoln Principal Tom Kusler said he expects the school population of about 500 students to remain steady, or even modestly increase.
"We're still able to maintain the same number of sections," he said.
There are 37 teachers and 21 classrooms at Lincoln.
The big changes will be in technology, he said. This year, three fifth-grade teachers began using electronic SMART Boards in a pilot project.
"We have 19 teachers that are getting SMART Boards now," he said.
Kusler said the purchases will come from the federal Title I funds, not the Bemidji School District budget.
"I think technologies are what we're going to be getting into more and more down the road," he said.
Information for this article came from "Celebrating Lincoln School," a Lincoln School History Project, and the Beltrami County Historical Society archives.
Quantum Time Waits for No
Quantum Theory, also quantum mechanics, in physics, a theory based
on using the concept of the quantum unit to describe the dynamic
properties of subatomic particles and the interactions of matter and
radiation. The foundation was laid by the German physicist Max
Planck, who postulated in 1900 that energy can be emitted or
absorbed by matter only in small, discrete units called quanta.
Fundamental to the development of quantum mechanics was the
uncertainty principle, formulated by the German physicist Werner
Heisenberg in 1927, which states that the position and momentum of a
subatomic particle cannot be specified simultaneously.
Spectral Lines of Atomic Hydrogen: When an electron makes a
transition from one energy level to another, the electron emits a
photon with a particular energy. These photons are then observed as
emission lines using a spectroscope. The Lyman series involves
transitions to the lowest, or ground state, energy level. Transitions
to the second energy level are called the Balmer series. These
transitions involve frequencies in the visible part of the spectrum,
where each transition is characterized by a distinct color.
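The wavelengths of these series follow from the Rydberg formula. A minimal sketch in Python (the Rydberg constant is the standard CODATA value, not a number from the text):

```python
# Hydrogen emission wavelengths from the Rydberg formula:
# 1/lambda = R * (1/n1^2 - 1/n2^2), with n1 the lower level.
R = 1.0973731568e7  # Rydberg constant, 1/m

def wavelength_nm(n1, n2):
    """Wavelength (nm) of the photon emitted in the n2 -> n1 transition."""
    inv = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv

# Lyman series ends on n=1 (ultraviolet); Balmer on n=2 (visible).
balmer = [wavelength_nm(2, n) for n in (3, 4, 5)]
print([round(w, 1) for w in balmer])  # [656.1, 486.0, 433.9]
```

The three Balmer values are the familiar red, blue-green, and violet hydrogen lines, each transition giving one sharp wavelength rather than a broad band.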
In the 18th and 19th centuries, Newtonian, or classical, mechanics
appeared to provide a wholly accurate description of the motions of
bodies—for example, planetary motion. In the late 19th and early
20th centuries, however, experimental findings raised doubts about
the completeness of Newtonian theory. Among the newer observations
were the lines that appear in the spectra of light emitted by heated
gases, or gases in which electric discharges take place.
According to the model of the atom developed in the early 20th century
by the New Zealand-born physicist Ernest Rutherford, in which negatively charged electrons
circle a positive nucleus in orbits prescribed by Newton’s laws of
motion, scientists had also expected that the electrons would emit
light over a broad frequency range, rather than in the narrow
frequency ranges that form the lines in a spectrum.
Another puzzle for physicists was the coexistence of two theories of
light: the corpuscular theory, which explains light as a stream of
particles, and the wave theory, which views light as electromagnetic
waves. A third problem was the absence of a molecular basis for
thermodynamics. In his book Elementary Principles in Statistical
Mechanics (1902), the American mathematical physicist J. Willard
Gibbs conceded the impossibility of framing a theory of molecular
action that reconciled thermodynamics, radiation, and electrical
phenomena as they were then understood.
At the turn of the century, physicists did not yet clearly recognize
that these and other difficulties in physics were in any way
related. The first development that led to the solution of these
difficulties was Planck’s introduction of the concept of the
quantum, as a result of physicists’ studies of blackbody radiation
during the closing years of the 19th century. (The term blackbody
refers to an ideal body or surface that absorbs all radiant energy
without any reflection.)
A body at a moderately high temperature — a
"red heat" — gives off most of its radiation in the low frequency (red
and infrared) regions; a body at a higher temperature — "white
heat" — gives off comparatively more radiation in higher frequencies
(yellow, green, or blue). During the 1890s physicists conducted
detailed quantitative studies of these phenomena and expressed their
results in a series of curves or graphs. The classical, or prequantum, theory predicted an altogether different set of curves
from those actually observed.
What Planck did was to devise a
mathematical formula that described the curves exactly; he then
deduced a physical hypothesis that could explain the formula. His
hypothesis was that energy is radiated only in quanta of energy hν,
where ν is the frequency and h is the quantum of action, now known as
Planck's constant.
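The contrast between the observed curves and the classical prediction can be sketched numerically. The following is an illustration, not a reconstruction of Planck's own calculation; the constants are standard values, and the temperature of roughly 5800 K (about that of the sun's surface) is an assumption chosen for the example:

```python
import math

h = 6.62607015e-34   # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength, T):
    """Planck's spectral radiance (W / m^2 / sr / m)."""
    a = 2.0 * h * c**2 / wavelength**5
    return a / (math.exp(h * c / (wavelength * kB * T)) - 1.0)

def rayleigh_jeans(wavelength, T):
    """Classical (prequantum) prediction; diverges at short wavelengths."""
    return 2.0 * c * kB * T / wavelength**4

# At long wavelengths the two curves agree; toward the ultraviolet the
# classical one blows up while Planck's stays finite and turns over.
T = 5800.0
for lam in (10e-6, 500e-9, 100e-9):
    print(lam, planck(lam, T), rayleigh_jeans(lam, T))
```

This is exactly the discrepancy the blackbody measurements of the 1890s exposed: the quantum hypothesis tames the short-wavelength divergence of the classical curve.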
The next important developments in quantum mechanics were the work
of German-born American physicist and Nobel laureate Albert
Einstein. He used Planck’s concept of the quantum to explain certain
properties of the photoelectric effect—an experimentally observed
phenomenon in which electrons are emitted from metal surfaces when
radiation falls on these surfaces.
According to classical theory, the energy, as measured by the
voltage of the emitted electrons, should be proportional to the
intensity of the radiation. The energy of the electrons, however,
was found to be independent of the intensity of radiation—which
determined only the number of electrons emitted—and to depend solely
on the frequency of the radiation. The higher the frequency of the
incident radiation, the greater is the electron energy; below a
certain critical frequency no electrons are emitted. These facts
were explained by Einstein by assuming that a single quantum of
radiant energy ejects a single electron from the metal. The energy
of the quantum is proportional to the frequency, and so the energy
of the electron depends on the frequency.
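Einstein's relation can be sketched in a few lines. The work function value of 2.3 eV (roughly that of sodium) is an assumption for the example, not a figure from the text:

```python
h = 6.62607015e-34    # Planck's constant, J*s
eV = 1.602176634e-19  # joules per electron volt

def ejected_energy_eV(freq_hz, work_function_eV):
    """Einstein's photoelectric relation: kinetic energy = h*nu - W,
    with no emission at all below the threshold frequency W/h."""
    K = h * freq_hz / eV - work_function_eV
    return max(K, 0.0)  # below threshold, no electrons are emitted

W = 2.3  # assumed work function, eV (roughly sodium)
print(ejected_energy_eV(4.0e14, W))  # red light: below threshold, 0.0
print(ejected_energy_eV(7.5e14, W))  # violet light: ~0.8 eV electrons
```

Raising the intensity at 4.0e14 Hz would eject nothing at all, however bright the light; only raising the frequency past W/h produces electrons, which is the fact classical theory could not explain.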
In 1911 Rutherford established the existence of the atomic nucleus.
He assumed, on the basis of experimental evidence obtained from the
scattering of alpha particles by the nuclei of gold atoms, that
every atom consists of a dense, positively charged nucleus,
surrounded by negatively charged electrons revolving around the
nucleus as planets revolve around the sun. However, the classical
electromagnetic theory developed by the British physicist James
Clerk Maxwell unequivocally predicted that an electron revolving
around a nucleus will continuously radiate electromagnetic energy
until it has lost all its energy, and eventually will fall into the
nucleus. Thus, according to classical theory, an atom, as described
by Rutherford, is unstable. This difficulty led the Danish physicist
Niels Bohr, in 1913, to postulate that in an atom the classical
theory does not hold, and that electrons move in fixed orbits. Every
change in orbit by the electron corresponds to the absorption or
emission of a quantum of radiation.
The application of Bohr’s theory to atoms with more than one
electron proved difficult. The mathematical equations for the next
simplest atom, the helium atom, were solved during the 1910s and
1920s, but the results were not entirely in accordance with
experiment. For more complex atoms, only approximate solutions of
the equations are possible, and these are only partly concordant
with experimental data.
The French physicist Louis Victor de Broglie suggested in 1924 that
because electromagnetic waves show particle characteristics,
particles should, in some cases, also exhibit wave properties. This
prediction was verified experimentally within a few years by the
American physicists Clinton Joseph Davisson and Lester Halbert
Germer and the British physicist George Paget Thomson. They showed
that a beam of electrons scattered by a crystal produces a
diffraction pattern characteristic of a wave (see Diffraction). The
wave concept of a particle led the Austrian physicist Erwin
Schrödinger to develop a so-called wave equation to describe the
wave properties of a particle and, more specifically, the wave
behavior of the electron in the hydrogen atom.
Although this differential equation was continuous and gave
solutions for all points in space, the permissible solutions of the
equation were restricted by certain conditions expressed by
mathematical equations called eigenfunctions (German eigen, "own").
The Schrödinger wave equation thus had only certain discrete
solutions; these solutions were mathematical expressions in which
quantum numbers appeared as parameters. (Quantum numbers are
integers developed in particle physics to give the magnitudes of
certain characteristic quantities of particles or systems.) The
Schrödinger equation was solved for the hydrogen atom and gave
conclusions in substantial agreement with earlier quantum theory.
Moreover, it was solvable for the helium atom, which earlier theory
had failed to explain adequately, and here also it was in agreement
with experimental evidence. The solutions of the Schrödinger
equation also indicated that no two electrons could have the same
four quantum numbers—that is, be in the same energy state. This
rule, which had already been established empirically by
Austro-American physicist and Nobel laureate Wolfgang Pauli in 1925,
is called the exclusion principle.
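The exclusion principle can be made concrete by enumerating the allowed quantum numbers. A minimal sketch, using the standard rules for the numbers (n, l, m, s), which the text mentions only in passing:

```python
def states_in_shell(n):
    """Enumerate the (n, l, m, s) quantum numbers allowed in shell n.
    The exclusion principle permits one electron per distinct tuple."""
    return [(n, l, m, s)
            for l in range(n)          # orbital number l = 0 .. n-1
            for m in range(-l, l + 1)  # magnetic number m = -l .. +l
            for s in (-0.5, 0.5)]      # two spin orientations

# Shell capacities come out as 2*n^2, matching the lengths of the
# periods of the periodic table.
print([len(states_in_shell(n)) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```

Because no two electrons may share all four numbers, each shell fills up and closes, which is what gives atoms their chemical structure.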
What is Matter
In the 20th century, physicists discovered that matter behaved as
both a wave and a particle. Austrian physicist and Nobel Prize
winner Erwin Schrödinger discussed this apparent paradox in a
lecture in Geneva, Switzerland, in 1952. A condensed and translated
version of his lecture appeared in Scientific American the following
year.
What Is Matter?
The wave-particle dualism afflicting modern physics is best resolved
in favor of waves, believes the author, but there is no clear
picture of matter on which physicists can agree
Fifty years ago science seemed on the road to a clear-cut answer to
the ancient question which is the title of this article. It looked
as if matter would be reduced at last to its ultimate building
blocks—to certain submicroscopic but nevertheless tangible and
measurable particles. But it proved to be less simple than that.
Today a physicist no longer can distinguish significantly between
matter and something else. We no longer contrast matter with forces
or fields of force as different entities; we know now that these
concepts must be merged. It is true that we speak of "empty" space
(i.e., space free of matter), but space is never really empty,
because even in the remotest voids of the universe there is always
starlight—and that is matter. Besides, space is filled with
gravitational fields, and according to Einstein gravity and inertia
cannot very well be separated.
Thus the subject of this article is in fact the total picture of
space-time reality as envisaged by physics. We have to admit that
our conception of material reality today is more wavering and
uncertain than it has been for a long time. We know a great many
interesting details, learn new ones every week. But to construct a
clear, easily comprehensible picture on which all physicists would
agree—that is simply impossible.
Physics stands at a grave crisis of
ideas. In the face of this crisis, many maintain that no objective
picture of reality is possible. However, the optimists among us (of
whom I consider myself one) look upon this view as a philosophical
extravagance born of despair. We hope that the present fluctuations
of thinking are only indications of an upheaval of old beliefs which
in the end will lead to something better than the mess of formulas
which today surrounds our subject.
Since the picture of matter that I am supposed to draw does not yet
exist, since only fragments of it are visible, some parts of this
narrative may be inconsistent with others. Like Cervantes’ tale of Sancho Panza, who loses his donkey in one chapter but a few chapters
later, thanks to the forgetfulness of the author, is riding the dear
little animal again, our story has contradictions. We must start
with the well-established concept that matter is composed of
corpuscles or atoms, whose existence has been quite "tangibly"
demonstrated by many beautiful experiments, and with Max Planck’s
discovery that energy also comes in indivisible units, called
quanta, which are supposed to be transferred abruptly from one
carrier to another.
But then Sancho Panza’s donkey will return. For I shall have to ask
you to believe neither in corpuscles as permanent individuals nor in
the suddenness of the transfer of an energy quantum. Discreteness is
present, but not in the traditional sense of discrete single
particles, let alone in the sense of abrupt processes. Discreteness
arises merely as a structure from the laws governing the phenomena.
These laws are by no means fully understood; a probably correct
analogue from the physics of palpable bodies is the way various
partial tones of a bell derive from its shape and from the laws of
elasticity to which, of themselves, nothing discontinuous adheres.
The idea that matter is made up of ultimate particles was advanced
as early as the fifth century B.C. by Leucippus and Democritus, who
called these particles atoms. The corpuscular theory of matter was
lifted to physical reality in the theory of gases developed during
the 19th century by James Clerk Maxwell and Ludwig Boltzmann. The
concept of atoms and molecules in violent motion, colliding and
rebounding again and again, led to full comprehension of all the
properties of gases: their elastic and thermal properties, their
viscosity, heat conductivity and diffusion. At the same time it led
to a firm foundation of the mechanical theory of heat, namely, that
heat is the motion of these ultimate particles, which becomes
increasingly violent with rising temperature.
Within one tremendously fertile decade at the turn of the century
came the discoveries of X-rays, of electrons, of the emission of
streams of particles and other forms of energy from the atomic
nucleus by radioactive decay, of the electric charges on the various
particles. The masses of these particles, and of the atoms
themselves, were later measured very precisely, and from this was
discovered the mass defect of the atomic nucleus as a whole. The
mass of a nucleus is less than the sum of the masses of its
component particles; the lost mass becomes the binding energy
holding the nucleus firmly together. This is called the packing
effect. The nuclear forces of course are not electrical forces—those
are repellent—but are much stronger and act only within very short
distances, about 10⁻¹³ centimeter.
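The packing effect is easy to check numerically for a concrete nucleus. A sketch for helium-4, using standard particle masses (the numbers are reference values, not figures from the lecture):

```python
# Mass defect of the helium-4 nucleus: the whole weighs less than its
# parts, and the missing mass is the binding energy via E = m*c^2.
u = 1.66053906660e-27   # atomic mass unit, kg
c = 2.99792458e8        # speed of light, m/s
MeV = 1.602176634e-13   # joules per MeV

m_proton  = 1.007276    # masses in atomic mass units
m_neutron = 1.008665
m_helium4 = 4.001506    # nuclear (not atomic) mass

defect = 2 * m_proton + 2 * m_neutron - m_helium4  # ~0.0304 u
binding_MeV = defect * u * c**2 / MeV
print(round(binding_MeV, 1))  # 28.3 (MeV), about 7 MeV per nucleon
```

About 0.8 percent of the constituents' mass is "lost" to binding energy, which is why the nuclear forces, short-ranged as they are, hold the nucleus so firmly together.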
Here I am already caught in a contradiction. Didn’t I say at the
beginning that we no longer assume the existence of force fields
apart from matter? I could easily talk myself out of it by saying:
Well, the force field of a particle is simply considered a part of
it. But that is not the fact. The established view today is rather
that everything is at the same time both particle and field.
Everything has the continuous structure with which we are familiar
in fields, as well as the discrete structure with which we are
equally familiar in particles. This concept is supported by
innumerable experimental facts and is accepted in general, though
opinions differ on details, as we shall see.
In the particular case of the field of nuclear forces, the particle
structure is more or less known. Most likely the continuous force
field is represented by the so-called pi mesons. On the other hand,
the protons and neutrons, which we think of as discrete particles,
indisputably also have a continuous wave structure, as is shown by
the interference patterns they form when diffracted by a crystal.
The difficulty of combining these two so very different character
traits in one mental picture is the main stumbling-block that causes
our conception of matter to be so uncertain.
Neither the particle concept nor the wave concept is hypothetical.
The tracks in a photographic emulsion or in a Wilson cloud chamber
leave no doubt of the behavior of particles as discrete units. The
artificial production of nuclear particles is being attempted right
now with terrific expenditure, defrayed in the main by the various
state ministries of defense. It is true that one cannot kill anybody
with one such racing particle, or else we should all be dead by now.
But their study promises, indirectly, a hastened realization of the
plan for the annihilation of mankind which is so close to all our
hearts.
You can easily observe particles yourself by looking at a luminous
numeral of your wrist watch in the dark with a magnifying glass. The
luminosity surges and undulates, just as a lake sometimes twinkles
in the sun. The light consists of sparklets, each produced by a
so-called alpha particle (helium nucleus) expelled by a radioactive
atom which in this process is transformed into a different atom. A
specific device for detecting and recording single particles is the
Geiger-Müller counter. In this short résumé I cannot possibly
exhaust the many ways in which we can observe single particles.
Now to the continuous field or wave character of matter. Wave
structure is studied mainly by means of diffraction and
interference—phenomena which occur when wave trains cross each
other. For the analysis and measurement of light waves the principal
device is the ruled grating, which consists of a great many fine,
parallel, equidistant lines, closely engraved on a specular metallic
surface. Light impinging from one direction is scattered by them and
collected in different directions depending on its wavelength. But
even the finest ruled gratings we can produce are too coarse to
scatter the very much shorter waves associated with matter. The fine
lattices of crystals, however, which Max von Laue first used as
gratings to analyze the very short X-rays, will do the same for
"matter waves." Directed at the surface of a crystal, high-velocity
streams of particles manifest their wave nature. With crystal
gratings physicists have diffracted and measured the wavelengths of
electrons, neutrons and protons.
What does Planck’s quantum theory have to do with all this? Planck
told us in 1900 that he could comprehend the radiation from red-hot
iron, or from an incandescent star such as the sun, only if this
radiation was produced in discrete portions and transferred in such
discrete quantities from one carrier to another (e.g., from atom to
atom). This was extremely startling, because up to that time energy
had been a highly abstract concept. Five years later Einstein told
us that energy has mass and mass is energy; in other words, that
they are one and the same. Now the scales begin to fall from our
eyes: our dear old atoms, corpuscles, particles are Planck’s energy
quanta. The carriers of those quanta are themselves quanta. One gets
dizzy. Something quite fundamental must lie at the bottom of this,
but it is not surprising that the secret is not yet understood.
After all, the scales did not fall suddenly. It took 20 or 30 years.
And perhaps they still have not fallen completely.
The next step was not quite so far reaching, but important enough.
By an ingenious and appropriate generalization of Planck’s
hypothesis Niels Bohr taught us to understand the line spectra of
atoms and molecules and how atoms were composed of heavy, positively
charged nuclei with light, negatively charged electrons revolving
around them. Each small system—atom or molecule—can harbor only
definite discrete energy quantities, corresponding to its nature or
its constitution. In transition from a higher to a lower "energy
level" it emits the excess energy as a radiation quantum of definite
wavelength, inversely proportional to the quantum given off. This
means that a quantum of given magnitude manifests itself in a
periodic process of definite frequency which is directly
proportional to the quantum; the frequency equals the energy quantum
divided by the famous Planck’s constant, h.
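Bohr's frequency condition is a one-line formula. A sketch applying it to hydrogen, whose level energies E_n = −13.6 eV / n² are a standard result assumed here rather than stated in the lecture:

```python
h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

def emitted_frequency(E_upper_eV, E_lower_eV):
    """Bohr's condition: nu = (E_upper - E_lower) / h."""
    return (E_upper_eV - E_lower_eV) * eV / h

# Hydrogen levels E_n = -13.6 eV / n^2; the 2 -> 1 jump gives the
# ultraviolet Lyman-alpha line.
nu = emitted_frequency(-13.6 / 4, -13.6)
print(nu, c / nu)  # ~2.47e15 Hz, wavelength ~1.22e-7 m
```

The emitted frequency is directly proportional to the energy given off, exactly as the text states: divide the quantum by h and you have the line's frequency.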
According to Einstein a particle has the energy mc², m being the
mass of the particle and c the velocity of light. In 1925 Louis de Broglie drew the inference, which rather suggests itself, that a
particle might have associated with it a wave process of frequency
mc² divided by h. The particle for which he postulated such a wave
was the electron. Within two years the "electron waves" required by
his theory were demonstrated by the famous electron diffraction
experiment of C. J. Davisson and L. H. Germer.
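The wavelength de Broglie's theory assigns to an electron is λ = h/p. A nonrelativistic sketch at the 54 eV beam energy used by Davisson and Germer (the beam energy and constants are standard reference values, not figures from the lecture):

```python
import math

h = 6.62607015e-34      # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electron volt

def de_broglie_nm(kinetic_eV):
    """de Broglie wavelength lambda = h / p for a nonrelativistic
    electron of the given kinetic energy."""
    p = math.sqrt(2.0 * m_e * kinetic_eV * eV)  # momentum, kg*m/s
    return h / p * 1e9

# Electrons at 54 eV, as in the Davisson-Germer experiment: the
# wavelength is comparable to atomic spacings in a crystal (~0.1 nm),
# which is why crystal lattices can serve as diffraction gratings.
print(round(de_broglie_nm(54.0), 3))  # 0.167 (nm)
```

This also makes the later remark about wave packets concrete: momentum and wavelength are inversely proportional, so faster electrons diffract at smaller angles.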
This was the starting
point for the cognition that everything — anything at all — is
simultaneously particle and wave field. Thus de Broglie’s
dissertation initiated our uncertainty about the nature of matter.
Both the particle picture and the wave picture have truth value, and
we cannot give up either one or the other. But we do not know how to
That the two pictures are connected is known in full generality with
great precision and down to amazing details. But concerning the
unification to a single, concrete, palpable picture opinions are so
strongly divided that a great many deem it altogether impossible. I
shall briefly sketch the connection. But do not expect that a
uniform, concrete picture will emerge before you; and do not blame
the lack of success either on my ineptness in exposition or your own
denseness—nobody has yet succeeded.
One distinguishes two things in a wave. First of all, a wave has a
front, and a succession of wave fronts forms a system of surfaces
like the layers of an onion. You are familiar with the
two-dimensional analogue of the beautiful wave circles that form on
the smooth surface of a pond when a stone is thrown in. The second
characteristic of a wave, less intuitive, is the path along which it
travels—a system of imagined lines perpendicular to the wave fronts.
These lines are known as the wave "normals" or "rays."
We can make the provisional assertion that these rays correspond to
the trajectories of particles. Indeed, if you cut a small piece out
of a wave, approximately 10 or 20 wavelengths along the direction of
propagation and about as much across, such a "wave packet" would
actually move along a ray with exactly the same velocity and change
of velocity as we might expect from a particle of this particular
kind at this particular place, taking into account any force fields
acting on the particle.
Here I falter. For what I must say now, though correct, almost
contradicts this provisional assertion. Although the behavior of the
wave packet gives us a more or less intuitive picture of a particle,
which can be worked out in detail (e.g., the momentum of a particle
increases as the wavelength decreases; the two are inversely
proportional), yet for many reasons we cannot take this intuitive
picture quite seriously. For one thing, it is, after all, somewhat
vague, the more so the greater the wavelength. For another, quite
often we are dealing not with a small packet but with an extended
wave. For still another, we must also deal with the important
special case of very small "packelets" which form a kind of
"standing wave" which can have no wave fronts or wave normals.
One interpretation of wave phenomena which is extensively supported
by experiments is this: At each position of a uniformly propagating
wave train there is a twofold structural connection of interactions,
which may be distinguished as "longitudinal" and "transversal." The
transversal structure is that of the wave fronts and manifests
itself in diffraction and interference experiments; the longitudinal
structure is that of the wave normals and manifests itself in the
observation of single particles. However, these concepts of
longitudinal and transversal structures are not sharply defined and
absolute, since the concepts of wave front and wave normal are not
either.
The interpretation breaks down completely in the special case of the
standing waves mentioned above. Here the whole wave phenomenon is
reduced to a small region of the dimensions of a single or very few
wavelengths. You can produce standing water waves of a similar
nature in a small basin if you dabble with your finger rather
uniformly in its center, or else just give it a little push so that
the water surface undulates. In this situation we are not dealing
with uniform wave propagation; what catches the interest are the
normal frequencies of these standing waves.
The water waves in the
basin are an analogue of a wave phenomenon associated with
electrons, which occurs in a region just about the size of the atom.
The normal frequencies of the wave group washing around the atomic
nucleus are universally found to be exactly equal to Bohr’s atomic
"energy levels" divided by Planck’s constant h. Thus the ingenious
yet somewhat artificial assumptions of Bohr’s model of the atom, as
well as of the older quantum theory in general, are superseded by
the far more natural idea of de Broglie’s wave phenomenon. This wave
phenomenon forms the "body" proper of the atom. It takes the place
of the individual pointlike electrons which in Bohr’s model are
supposed to swarm around the nucleus. Such pointlike single
particles are completely out of the question within the atom, and if
one still thinks of the nucleus itself in this way one does so quite
consciously for reasons of expediency.
What seems to me particularly important about the discovery that
"energy levels" are virtually nothing but the frequencies of normal
modes of vibration is that now one can do without the assumption of
sudden transitions, or quantum jumps, since two or more normal modes
may very well be excited simultaneously. The discreteness of the
normal frequencies fully suffices—so I believe—to support the
considerations from which Planck started and many similar and just
as important ones—I mean, in short, to support all of quantum
theory.
The theory of quantum jumps is becoming more and more unacceptable,
at least to me personally, as the years go on. Its abandonment has,
however, far-reaching consequences. It means that one must give up
entirely the idea of the exchange of energy in well-defined quanta
and replace it with the concept of resonance between vibrational
frequencies. Yet we have seen that because of the identity of mass
and energy, we must consider the particles themselves as Planck’s
energy quanta. This is at first frightening. For the substituted
theory implies that we can no longer consider the individual
particle as a well-defined permanent entity.
That it is, in fact, no such thing can be reasoned in other ways.
For one thing, there is Werner Heisenberg’s famous uncertainty
principle, according to which a particle cannot simultaneously have
a well-defined position and a sharply defined velocity. This
uncertainty implies that we cannot be sure that the same particle
could ever be observed twice.
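Heisenberg's relation Δx·Δp ≥ ħ/2 makes the point quantitative. A sketch, with the atomic confinement length of 10⁻¹⁰ m chosen as an illustrative assumption:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

def min_velocity_spread(dx_m, mass_kg=m_e):
    """Heisenberg's relation dx * dp >= hbar / 2, rewritten as the
    smallest possible velocity spread for a given position spread."""
    return hbar / (2.0 * mass_kg * dx_m)

# An electron localized to atomic dimensions (~1e-10 m) cannot have
# its velocity pinned down better than ~6e5 m/s, a sizable fraction
# of its orbital speed.
print(min_velocity_spread(1e-10))  # ~5.8e5 m/s
```

With the velocity that uncertain, there is no trajectory along which one could follow the electron and claim to observe "the same particle" twice.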
Another conclusive reason for not
attributing identifiable sameness to individual particles is that we
must obliterate their individualities whenever we consider two or
more interacting particles of the same kind, e.g., the two electrons
of a helium atom. Two situations which are distinguished only by the
interchange of the two electrons must be counted as one and the
same; if they are counted as two equal situations, nonsense obtains.
This circumstance holds for any kind of particle in arbitrary
numbers without exception.
Most theoreticians will probably accept the foregoing reasoning and
admit that the individual particle is not a well-defined permanent
entity of detectable identity or sameness. Nevertheless this
inadmissible concept of the individual particle continues to play a
large role in their ideas and discussions. Even deeper rooted is the
belief in "quantum jumps," which is now surrounded with a highly
abstruse terminology whose common-sense meaning is often difficult
to discern. For instance, an important word in the standing vocabulary
of quantum theory is "probability," referring to transition from one
level to another. But, after all, one can speak of the probability
of an event only assuming that, occasionally, it actually occurs. If
it does occur, the transition must indeed be sudden, since
intermediate stages are disclaimed. Moreover, if it takes time, it
might conceivably be interrupted halfway by an unforeseen
disturbance. This possibility leaves one completely at sea.
The wave v. corpuscle dilemma is supposed to be resolved by
asserting that the wave field merely serves for the computation of
the probability of finding a particle of given properties at a given
position if one looks for it there. But once one deprives the waves
of reality and assigns them only a kind of informative role, it
becomes very difficult to understand the phenomena of interference
and diffraction on the basis of the combined action of discrete
single particles. It certainly seems easier to explain particle
tracks in terms of waves than to explain the wave phenomenon in
terms of corpuscles.
"Real existence" is, to be sure, an expression which has been
virtually chased to death by many philosophical hounds. Its simple,
naive meaning has almost become lost to us. Therefore I want to
recall something else. I spoke of a corpuscle’s not being an
individual. Properly speaking, one never observes the same particle
a second time—very much as Heraclitus says of the river. You cannot
mark an electron, you cannot paint it red. Indeed, you must not even
think of it as marked; if you do, your "counting" will be false and
you will get wrong results at every step—for the structure of line
spectra, in thermodynamics and elsewhere. A wave, on the other hand,
can easily be imprinted with an individual structure by which it can
be recognized beyond doubt. Think of the beacon fires that guide
ships at sea.
The light shines according to a definite code; for
example: three seconds light, five seconds dark, one second light,
another pause of five seconds, and again light for three seconds—the
skipper knows that is San Sebastian. Or you talk by wireless
telephone with a friend across the Atlantic; as soon as he says,
"Hello there, Edward Meier speaking," you know that his voice has
imprinted on the radio wave a structure which can be distinguished
from any other.
But one does not have to go that far. If your wife
calls, "Francis!" from the garden, it is exactly the same thing,
except that the structure is printed on sound waves and the trip is
shorter (though it takes somewhat longer than the journey of radio
waves across the Atlantic). All our verbal communication is based on
imprinted individual wave structures. And, according to the same
principle, what a wealth of details is transmitted to us in rapid
succession by the movie or the television picture!
This characteristic, the individuality of the wave phenomenon, has
already been found to a remarkable extent in the very much finer
waves of particles. One example must suffice. A limited volume of
gas, say helium, can be thought of either as a collection of many
helium atoms or as a superposition of elementary wave trains of
matter waves. Both views lead to the same theoretical results as to
the behavior of the gas upon heating, compression, and so on. But
when you attempt to apply certain somewhat involved enumerations to
the gas, you must carry them out in different ways according to the
mental picture with which you approach it. If you treat the gas as
consisting of particles, then no individuality must be ascribed to
them, as I said. If, however, you concentrate on the matter wave
trains instead of on the particles, every one of the wave trains has
a well-defined structure which is different from that of any other.
It is true that there are many pairs of waves which are so similar
to each other that they could change roles without any noticeable
effect on the gas. But if you should count the very many similar
states formed in this way as merely a single one, the result would
be quite wrong.
In spite of everything we cannot completely banish the concepts of
quantum jump and individual corpuscle from the vocabulary of
physics. We still require them to describe many details of the
structure of matter. How can one ever determine the weight of a
carbon nucleus and of a hydrogen nucleus, each to the precision of
several decimals, and detect that the former is somewhat lighter
than the 12 hydrogen nuclei combined in it, without accepting for
the time being the view that these particles are something quite
concrete and real?
This view is so much more convenient than the
roundabout consideration of wave trains that we cannot do without
it, just as the chemist does not discard his valence-bond formulas,
although he fully realizes that they represent a drastic
simplification of a rather involved wave-mechanical situation.
If you finally ask me: "Well, what are these corpuscles, really?" I
ought to confess honestly that I am almost as little prepared to
answer that as to tell where Sancho Panza’s second donkey came from.
At the most, it may be permissible to say that one can think of
particles as more or less temporary entities within the wave field
whose form and general behavior are nevertheless so clearly and
sharply determined by the laws of waves that many processes take
place as if these temporary entities were substantial permanent
beings. The mass and the charge of particles, defined with such
precision, must then be counted among the structural elements
determined by the wave laws.
The conservation of charge and mass in
the large must be considered as a statistical effect, based on the
"law of large numbers."
Simultaneously with the development of wave mechanics, Heisenberg
evolved a different mathematical analysis known as matrix mechanics.
According to Heisenberg’s theory, which was developed in
collaboration with the German physicists Max Born and Ernst Pascual
Jordan, the formula was not a differential equation but a matrix: an
array consisting of an infinite number of rows, each row consisting
of an infinite number of quantities.
Matrix mechanics introduced
infinite matrices to represent the position and momentum of an
electron inside an atom. Also, different matrices exist, one for
each observable physical property associated with the motion of an
electron, such as energy, position, momentum, and angular momentum.
These matrices, like Schrödinger’s differential equations, could be
solved; in other words, they could be manipulated to produce
predictions as to the frequencies of the lines in the hydrogen
spectrum and other observable quantities.
Like wave mechanics,
matrix mechanics was in agreement with the earlier quantum theory
for processes in which the earlier quantum theory agreed with
experiment; it was also useful in explaining phenomena that earlier
quantum theory could not explain.
Schrödinger subsequently succeeded in showing that wave mechanics
and matrix mechanics are different mathematical versions of the same
theory, now called quantum mechanics. Even for the simple hydrogen
atom, which consists of two particles, both mathematical
interpretations are extremely complex. The next simplest atom,
helium, has three particles, and even in the relatively simple
mathematics of classical dynamics, the three-body problem (that of
describing the mutual interactions of three separate bodies) is not
The energy levels can be calculated accurately,
however, even if not exactly. In applying quantum-mechanics
mathematics to relatively complex situations, a physicist can use
one of a number of mathematical formulations. The choice depends on
the convenience of the formulation for obtaining suitable
Although quantum mechanics describes the atom purely in terms of
mathematical interpretations of observed phenomena, a rough verbal
description can be given of what the atom is now thought to be like.
Surrounding the nucleus is a series of stationary waves; these waves
have crests at certain points, each complete standing wave
representing an orbit. The absolute square of the amplitude of the
wave at any point is a measure of the probability that an electron
will be found at that point at any given time.
Thus, an electron can
no longer be said to be at any precise point at any given time.
The impossibility of pinpointing an electron at any precise time was
analyzed by Heisenberg, who in 1927 formulated the uncertainty
principle. This principle states the impossibility of simultaneously
specifying the precise position and momentum of any particle. In
other words, the more accurately a particle’s momentum is measured
and known, the less accuracy there can be in the measurement and
knowledge of its position.
This principle is also fundamental to the
understanding of quantum mechanics as it is generally accepted
today: The wave and particle character of electromagnetic radiation
can be understood as two complementary properties of radiation.
Another way of expressing the uncertainty principle is that the
wavelength of a quantum mechanical principle is inversely
proportional to its momentum. As atoms are cooled they slow down and
their corresponding wavelength grows larger.
At a low enough
temperature this wavelength is predicted to exceed the spacing
between particles, causing atoms to overlap, becoming
indistinguishable, and melding into a single quantum state. In 1995
a team of Colorado scientists, led by National Institutes of
Standards and Technology physicist Eric Cornell and University of
Colorado physicist Carl Weiman, cooled rubidium atoms to a
temperature so low that the particles entered this merged state,
known as a Bose-Einstein condensate.
The condensate essentially
behaves like one atom even though it is made up of thousands.
- Physicists Condense Supercooled Atoms, Forming New State of Matter
A team of Colorado physicists has cooled atoms of gas to a
temperature so low that the particles entered a merged state, known
as a "Bose-Einstein condensate." This phenomenon was first predicted
about 70 years ago by the theories of German-born American physicist
Albert Einstein and Indian physicist Satyendra Nath Bose. The
condensed particles are considered a new state of matter, different
from the common states of matter—gas, liquid, and solid—and from
plasma, a high temperature, ionized form of matter that is found in
the sun and other stars.
Physicists have great expectations for the application of this
discovery. Because the condensate essentially behaves like one atom
even though it is made up of thousands, investigators should be able
to measure interactions at the atomic and subatomic level that were
previously extremely difficult, if not impossible, to study
The condensate was detected June 5 by a Colorado team led by
National Institutes of Standards and Technology physicist Eric
Cornell and University of Colorado physicist Carl Wieman. Their
discovery was reported in the journal Science on July 14. Cornell
and Wieman formed their condensate from rubidium gas.
Several groups of physicists, including the teams in Texas and
Colorado and a group at the Massachusetts Institute of Technology,
have been working to form pure condensate in recent years. The goal
of the investigations has been to create a pure chunk of condensate
out of atoms in an inert medium, such as a diffuse, nonreactive gas.
The effort began when methods of cooling and trapping became refined
enough that it seemed possible to reach the required conditions of
temperature and density.
The Colorado team used two techniques: first laser cooling and then
evaporative cooling. The laser technique used laser light whose
frequency was carefully tuned to interact with the rubidium atoms
and gently reduce their speeds. A number of lasers were aimed at the
gas to slow the motion of the atoms in different directions.
The Colorado physicists then switched to evaporative cooling. In
this method, the gas is "trapped" by a magnetic field that dwindles
to zero at its center. Atoms that are moving wander out of the
field, while the coldest atoms cluster at the center. Because a few
very cold atoms could still escape at the zero field point of the
trap, the physicists perfected their system by adding a second
slowly circling magnetic field so that the zero point moved, not
giving the atoms the chance to escape through it.
Physicists will now begin to explore the properties of the
condensate and see what other materials they can use to form it. One
unusual characteristic of the condensate is that it is composed of
atoms that have lost their individual identities. This is analogous
to laser light, which is composed of light particles, or photons,
that similarly have become indistinguishable and all behave in
exactly the same manner. The laser has found a myriad of uses both
in practical applications and in theoretical research, and the
Bose-Einstein condensate may turn out to be just as important. Some
scientists speculate that if a condensate can be readily produced
and sustained, it could be used to miniaturize and speed up computer
components to a scale and quickness not possible before.
The prediction that a merged form of matter will emerge at extremely
low temperatures is based on a number of aspects of the quantum
theory. This theory governs the interaction of particles on a
subatomic scale. The basic principle of quantum theory is that
particles can only exist in certain discrete energy states.
The exact "quantum state" of a particle takes into consideration
such factors as the position of the particle and its "spin," which
can only have certain discrete values. A particle’s spin categorizes
it as either a boson or a fermion. Those two groups of particles
behave according to different sets of statistical rules. Bosons have
spins that are a constant number multiplied by an integer (e.g., 0,
1, 2, 3). Fermions have spins that are that same constant multiplied
by an odd half-integer (1/2, 3/2, 5/2, etc.). Examples of fermions
are the protons and neutrons that make up an atom’s nucleus, and
Composite particles, such as nuclei and atoms, are classified as
bosons or fermions based on the sum of the spins of their
constituent particles. For instance, an isotope of helium called
helium-4 turns out to be a bose particle. Helium-4 is made up of six
fermi particles: two electrons orbiting a nucleus made up of two
protons and two neutrons. Adding up six odd half-integers will yield
a whole integer, making helium-4 a boson. The atoms of rubidium used
in the Colorado experiment are bose particles as well. Only bose
atoms may form a condensate, but they do so only at a sufficiently
low temperature and high density.
At their lab in Colorado, Cornell and Wieman cooled a rubidium gas
down to a temperature as close to absolute zero, the temperature at
which particles stop moving, as they could get. The slower the
particles, the lower their momentum. In essence, the cooling brought
the momentum of the gas particles closer and closer to precisely
zero, as the temperature decreased to within a few billionths of a
degree Kelvin. (Kelvin degrees are on the scale of degrees Celsius,
but zero Kelvin is absolute zero, while zero Celsius is the freezing
point of water.)
As the temperature, and thus the momentum, of the gas particles
dropped to an infinitesimal amount, the possible locations of the
atom at any given moment increased proportionally. The goal of the
experiment was to keep the gas atoms packed together closely enough
that during this process—as their momentum got lower and lower, and
their wavelengths got larger and larger—their waves would begin to
overlap. This interplay of position and movement in three dimensions
with the relative distances between particles is known as the
phase-space density and is the key factor in forming a condensate.
In essence, the momentum of the atoms would become so precisely
pinpointed (near zero) that their position would become less and
less certain and there would be a relatively large amount of space
that would define each atom’s position. As the atoms slowed to
almost a stop, their positions became so fuzzy that each atom came
to occupy the same position as every other atom, losing their
individual identity. This odd phenomenon is a Bose-Einstein
As their experimental conditions neared the realm of Bose-Einstein
condensation, Cornell and Wieman noticed an abrupt rise in the peak
density of their sample, a type of discontinuity that strongly
indicates a phase transition. The Colorado physicists estimated that
after progressive evaporative cooling of the rubidium, they were
left with a nugget of about 2,000 atoms of pure condensate.
and Wieman then released the atoms from the "trap" in which they had
been cooling and sent a pulse of laser light at the condensate,
basically blowing it apart. They recorded an image of the expanding
cloud of atoms. Prior to the light pulse, when the density dropped
after the atoms were released, the physicists believed the
temperature of the condensate fell to an amazing frigidity of 20
nanoKelvins (20 billionths of one degree above absolute zero).
The image showed a larger, expanding sphere of particles with a
smaller, more concentrated elliptical-looking center. Cornell and
Wieman observed that when a gas is constrained and then released (in
an extreme example, as in a bomb), thermodynamics specifies that it
will expand outward equally in all directions regardless of the
shape in which it had been contained. This occurs because the
particles in that gas, even if the gas was very cold, were moving in
all different directions with various energies when the gas was
This rule of uniform expansion does not hold for a Bose-Einstein
condensate. Because the particles were all acting in exactly the
same manner at the time of the light pulse, their expansion should
give some indication of the shape of the space they had previously
inhabited. The uneven, elliptical-looking clump of atoms in the
center of the image recorded by Cornell and Wieman thus gave further
proof that a condensate had formed.
Bose-Einstein characteristics have been observed in other systems,
specifically, in superfluid liquid helium-4 and in superconductors.
It is believed that liquid helium-4 at a sufficiently low
temperature is composed of two components mixed together, the colder
of which is a Bose-Einstein condensate. Liquid helium-4, which at
very low temperatures is also a superconductor of heat, behaves in
dramatic ways, trickling up the sides of containers and rising in
Electrical superconductors are also boson-related phenomena. In
superconductors, which are also formed by supercooling, electrical
resistance disappears. In this case it is the electrons within a
substance’s atoms, rather than the atoms themselves, that condense.
The electrons pair up, together forming a particle of zero spin.
These paired electrons merge into an overall substance that flows
freely through the superconductor, offering no resistance to
Thus, once initiated, a current can flow
indefinitely in a superconductor.
Quantum mechanics solved all of the great difficulties that troubled
physicists in the early years of the 20th century. It gradually
enhanced the understanding of the structure of matter, and it
provided a theoretical basis for the understanding of atomic
structure (see Atom and Atomic Theory) and the phenomenon of
spectral lines: Each spectral line corresponds to the energy of a
photon transmitted or absorbed when an electron makes a transition
from one energy level to another.
The understanding of chemical
bonding was fundamentally transformed by quantum mechanics and came
to be based on Schrödinger’s wave equations. New fields in physics
emerged—condensed matter physics, superconductivity, nuclear
physics, and elementary particle physics (see Physics)—that all
found a consistent basis in quantum mechanics.
FURTHER DEVELOPMENTS: In the years since 1925, no fundamental
deficiencies have been found in quantum mechanics, although the
question of whether the theory should be accepted as complete has
come under discussion. In the 1930s the application of quantum
mechanics and special relativity to the theory of the electron (see
Quantum Electrodynamics) allowed the British physicist Paul Dirac to
formulate an equation that referred to the existence of the spin of
the electron. It further led to the prediction of the existence of
the positron, which was experimentally verified by the American
physicist Carl David Anderson.
The application of quantum mechanics to the subject of
electromagnetic radiation led to explanations of many phenomena,
such as bremsstrahlung (German, "braking radiation," the radiation
emitted by electrons slowed down in matter) and pair production (the
formation of a positron and an electron when electromagnetic energy
interacts with matter). It also led to a grave problem, however,
called the divergence difficulty: Certain parameters, such as the
so-called bare mass and bare charge of electrons, appear to be
infinite in Dirac’s equations.
(The terms bare mass and bare charge
refer to hypothetical electrons that do not interact with any matter
or radiation; in reality, electrons interact with their own electric
This difficulty was partly resolved in 1947-49 in a program
called renormalization, developed by the Japanese physicist
Shin’ichirô Tomonaga, the American physicists Julian S. Schwinger
and Richard Feynman, and the British physicist Freeman Dyson. In
this program, the bare mass and charge of the electron are chosen to
be infinite in such a way that other infinite physical quantities
are canceled out in the equations.
Renormalization greatly increased
the accuracy with which the structure of atoms could be calculated
from first principles.
Theoretical physicist C. Llewellyn Smith discusses the discoveries
that scientists have made to date about the electron and other
elementary particles—subatomic particles that scientists believe
cannot be split into smaller units of matter. Scientists have
discovered what Smith refers to as sibling and cousin particles to
the electron, but much about the nature of these particles is still
One way scientists learn about these particles is to
accelerate them to high energies, smash them together, and then
study what happens when they collide. By observing the behavior of
these particles, scientists hope to learn more about the fundamental
structures of the universe.
Electrons: The First Hundred Years
The discovery of the electron was announced by J. J. Thomson just
over 100 years ago, on April 30, 1897. In the intervening years we
have come to understand the mechanics that describe the behavior of
electrons—and indeed of all matter on a small scale—which is called
quantum mechanics. By exploiting this knowledge, we have learned to
manipulate electrons and make devices of a tremendous practical and
economic importance, such as transistors and lasers.
Meanwhile, what have we learned of the nature of the electron
itself? From the start, electrons were found to behave as elementary
particles, and this is still the case today. We know that if the
electron has any structure, it is on a scale of less than 1018 m,
i.e. less than 1 billionth of 1 billionth of a meter.
However, a major complication has emerged. We have discovered that
the electron has a sibling and cousins that are apparently equally
fundamental. The sibling is an electrically neutral particle, called
the neutrino, which is much lighter than the electron. The cousins
are two electrically charged particles, called the mu and the
which also have neutral siblings. The mu and the tau seem to be
identical copies of the electron, except that they are respectively
200 and 3,500 times heavier. Their role in the scheme of things and
the origin of their different masses remain mysteries — just the sort
of mysteries that particle physicists, who study the constituents of
matter and the forces that control their behavior, wish to resolve.
We therefore know of six seemingly fundamental particles, the
electron, the mu, the tau and their neutral siblings, which—like the
electron—do not feel the nuclear force, and incidentally are known
generically as leptons.
What about the constituents of atomic nuclei, which of course do
feel the nuclear force? At first sight, nuclei are made of protons
and neutrons, but these particles turned out not to be elementary.
It was found that when protons and neutrons are smashed together,
new particles are created. We now know that all these particles are
made of more elementary entities, called quarks. In a collision,
pairs of quarks and their antiparticles, called antiquarks, can be
created: part of the energy (e) of the incoming particles is turned
into mass (m) of these new particles, thanks to the famous
equivalence e = mc2. The quarks in the projectiles and the created
quark-antiquark pairs can then rearrange themselves to make various
different sorts of new particles.
Today, six types of quarks are known which, like the leptons (the
electron and its relations) have simple properties, and could be
elementary. In the past 30 years a recipe that describes the
behavior of these particles has been developed. It is called the
"Standard Model" of particle physics. However, we lack a real
understanding of the nature of these particles, and the logic behind
the Standard Model. What is wrong with the Standard Model?
First, it does not consistently combine Einstein’s theory of the
properties of space (called General Relativity) with a quantum
mechanical description of the properties of matter. It is therefore
Second, it contains too many apparently arbitrary futures—it is too
baroque, too byzantine—to be complete. It does not explain the role
of the mu and the tau, or answer the question whether the fact that
the numbers of leptons and quarks are the same—six each—is a
coincidence, or an indication of a deep connection between these
different types of particles. On paper, we can construct theories
that give better answers and explanations, and in which there are
such connections, but we do not know which, if any, of these
theories is correct.
Third, it has a missing, untested, element. This is not some minor
detail, but a central element, namely a mechanism to generate the
observed masses of the known particles, and hence also the different
ranges of the known forces (long range for gravity and
electromagnetism, as users of magnetic compasses know, but very
short range for the nuclear and the so-called weak forces, although
in every other respect these forces appear very similar). On paper,
a possible mechanism is known, called the Higgs mechanism, after the
British physicist Peter Higgs who invented it. But there are
alternative mechanisms, and in any case the Higgs mechanism is a
generic idea. We not only need to know if nature uses it, but if so,
how it is realized in detail.
Luckily the prospects of developing a deeper understanding are good.
The way forward is to perform experiments that can distinguish the
different possibilities. We know that the answer to the mystery of
the origin of mass, and the different ranges of forces, and certain
other very important questions, must lie in an energy range that
will be explored in experiments at the Large Hadron Collider, a new
accelerator now under construction at CERN [also known as the
European Laboratory for Particle Physics] near Geneva.
The fundamental tools on which experimental particle physics depends
are large accelerators, like the Large Hadron Collider, which
accelerate particles to very high energies and smash them together.
By studying what happens in the collisions of these particles, which
are typically electrons or protons (the nuclei of hydrogen atoms),
we can learn about their natures. The conditions that are created in
these collisions of particles existed just after the birth of the
universe, when it was extremely hot and dense. Knowledge derived
from experiments in particle physics is therefore essential input
for those who wish to understand the structure of the universe as a
whole, and how it evolved from an initial fireball into its present
The Large Hadron Collider will therefore not only open up a large
new window on the nature of matter, when it comes into operation in
2005, but also advance our understanding of the structure of the
universe. However, although it will undoubtedly resolve some major
questions and greatly improve our knowledge of nature, it would be
very surprising if it established a "final theory."
The only candidate theory currently known which appears to have the
potential to resolve all the problems mentioned above—the reason for
the existence of the mu and tau, reconciliation of
general relativity with quantum mechanics, etc.—describes the
electron and its relatives and the quarks, not as pointlike objects,
but as different vibrating modes of tiny strings. However, these
strings are so small (10-35 m) that they will never be observed
If this is so, the electron and the other known particles
will continue forever to appear to be fundamental pointlike objects,
even if the—currently very speculative—"string theory" scores enough
successes to convince us that this is not the case!
FUTURE PROSPECTS: Quantum mechanics underlies current attempts to
account for the strong nuclear force and to develop a unified theory for all the fundamental interactions
Nevertheless, doubts exist about the completeness of quantum theory.
The divergence difficulty, for example, is only partly resolved.
Just as Newtonian mechanics was eventually amended by quantum
mechanics and relativity, many scientists—and Einstein was among
them—are convinced that quantum theory will also undergo profound
changes in the future.
Great theoretical difficulties exist, for
example, between quantum mechanics and chaos theory, which began to
develop rapidly in the 1980s.
Ongoing efforts are being made by
theorists such as the British physicist Stephen Hawking, to develop
a system that encompasses both relativity and quantum mechanics.
Breakthroughs occurred in the area of quantum computing in the late
1990s. Quantum computers under development use components of a
chloroform molecule (a combination of chlorine and hydrogen atoms)
and a variation of a medical procedure called magnetic resonance
imaging (MRI) to compute at a molecular level. Scientists used a
branch of physics called quantum mechanics, which describes the
activity of subatomic particles (particles that make up atoms), as
the basis for quantum computing.
Quantum computers may one day be
thousands to millions of times faster than current computers,
because they take advantage of the laws that govern the behavior of
subatomic particles. These laws allow quantum computers to examine
all possible answers to a query at one time.
Future uses of quantum
computers could include code breaking and large database queries.
Quantum Time Waits for No Cosmos
THE INTRIGUING notion that time might run backwards when the
Universe collapses has run into difficulties. Raymond Laflamme, of
the Los Alamos National Laboratory in New Mexico, has carried out a
new calculation which suggests that the Universe cannot start out
uniform, go through a cycle of expansion and collapse, and end up in
a uniform state. It could start out disordered, expand, and then
collapse back into disorder. But, since the COBE data show that our
Universe was born in a smooth and uniform state, this symmetric
possibility cannot be applied to the real Universe.
Physicists have long puzzled over the fact that two distinct "arrows
of time" both point in the same direction. In the everyday world,
things wear out -- cups fall from tables and break, but broken cups
never re- assemble themselves spontaneously. In the expanding
Universe at large, the future is the direction of time in which
galaxies are further apart.
Many years ago, Thomas Gold suggested that these two arrows might be
linked. That would mean that if and when the expansion of the
Universe were to reverse, then the everyday arrow of time would also
reverse, with broken cups re-assembling themselves.
More recently, these ideas have been extended into quantum physics.
There, the arrow of time is linked to the so-called "collapse of the
wave function", which happens, for example, when an electron wave
moving through a TV tube collapses into a point particle on the
screen of the TV. Some researchers have tried to make the quantum
description of reality symmetric in time, by including both the
original state of the system (the TV tube before the electron passes
through) and the final state (the TV tube after the electron has
passed through) in one mathematical description.
Murray Gell-Mann and James Hartle recently extended this idea to the
whole Universe. They argued that if, as many cosmologists believe
likely, the Universe was born in a Big Bang, will expand out for a
finite time and then recollapse into a Big Crunch, the time-neutral
quantum theory could describe time running backwards in the
contracting half of its life.
Unfortunately, Laflamme has now shown that this will not work. He
has proved that if there are only small inhomogeneities present in
the Big Bang, then they must get larger throughout the lifetime of
the Universe, in both the expanding and the contracting phases. "A
low entropy Universe at the Big Bang cannot come back to low entropy
at the Big Crunch" (Classical and Quantum Gravity, vol 10 p L79). He
has found time-asymmetric solutions to the equations -- but only if
both Big Bang and Big Crunch are highly disordered, with the
Universe more ordered in the middle of its life.
Observations of the cosmic microwave background radiation show that
the Universe emerged from the Big Bang in a very smooth and uniform
state. This rules out the time-symmetric solutions.
is that even if the present expansion of the Universe does reverse,
time will not run backwards and broken cups will not start re- | 3.789452 |
Glucose is a type of sugar. It comes from food, and is also created in the liver. Glucose travels through the body in the blood. It moves from the blood to cells with the help of a hormone called insulin. Once glucose is in those cells, it can be used for energy.
Diabetes is a condition that makes it difficult for the body to use glucose. This causes a buildup of glucose in the blood. It also means the body is not getting enough energy. Type 2 diabetes is one type of diabetes. It is the most common type.
Medication, lifestyle changes, and monitoring can help control blood glucose levels.
Type 2 diabetes is often caused by a combination of factors. One factor is that your body begins to make less insulin. A second factor is that your body becomes resistant to insulin. This means there is insulin in your body, but your body cannot use it effectively. Insulin resistance is often related to excess body fat.
The doctor will ask about your symptoms and medical history. You will also be asked about your family history. A physical exam will be done.
Diagnosis is based on the results of blood testing. The American Diabetes Association (ADA) recommends that a diagnosis be made if you have one of the following:
- Symptoms of diabetes and a random blood test with a blood sugar level greater than or equal to 200 mg/dL (11.1 mmol/L)
- Fasting blood sugar test—Done after you have not eaten for eight or more hours—Showing blood sugar levels greater than or equal to 126 mg/dL (7 mmol/L) on two different days
- Glucose tolerance test—Measuring blood sugar two hours after you eat glucose—Showing glucose levels greater than or equal to 200 mg/dL (11.1 mmol/L)
- HbA1c level of 6.5% or higher—Indicates poor blood sugar control over the past 2-4 months
mg/dL = milligrams per deciliter of blood; mmol/L = millimole per liter of blood
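As a rough illustration (not medical advice), the cutoffs above can be put into a few lines of Python. The function name and its simplifications are our own; in particular, this toy check ignores the requirement that the fasting result be confirmed on two different days.

```python
# Illustrative sketch of the ADA thresholds described above.
# Not medical advice; the two-day confirmation for the fasting
# test is deliberately ignored in this toy version.

def meets_diagnostic_threshold(test, value, has_symptoms=False):
    """Return True if a single result meets the cutoff for its test type.

    `value` is mg/dL for the blood tests and a percentage for HbA1c.
    """
    if test == "random":
        return has_symptoms and value >= 200
    if test == "fasting":
        return value >= 126
    if test == "glucose_tolerance":
        return value >= 200
    if test == "hba1c_percent":
        return value >= 6.5
    raise ValueError(f"unknown test type: {test}")

print(meets_diagnostic_threshold("fasting", 131))        # True
print(meets_diagnostic_threshold("random", 215))         # False without symptoms
```

Note how the random test alone is not enough: the text requires symptoms of diabetes as well, which the sketch models with the `has_symptoms` flag.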
Treatment aims to:
- Maintain blood sugar at levels as close to normal as possible
- Prevent or delay complications
- Control other conditions that you may have, like high blood pressure and high cholesterol
Food and drinks have a direct effect on your blood glucose level. Eating healthy meals can help you control your blood glucose. It will also help your overall health. Some basic tips include:
If you are overweight, weight loss will help your body use insulin better. Talk to your doctor about a healthy weight goal. You and your doctor or dietitian can make a safe meal plan for you.
These options may help you lose weight:
Physical activity can:
- Make the body more sensitive to insulin
- Help you reach and maintain a healthy weight
- Lower the levels of fat in your blood
Aerobic exercise is any activity that increases your heart rate. Resistance training helps build muscle strength. Both types of exercise help to improve long-term glucose control. Regular exercise can also help reduce your risk of heart disease.
Talk to your doctor about an activity plan. Ask about any precautions you may need to take.
Certain medicines will help to manage blood glucose levels.
Medication taken by mouth may include:
- Metformin—To reduce the amount of glucose made by the body and to make the body more sensitive to insulin
- Medications that encourage the pancreas to make more insulin, such as sulfonylureas (glyburide, tolazamide) and dipeptidyl peptidase-4 inhibitors (saxagliptin)
- Insulin sensitizers such as pioglitazone—To help the body use insulin better
- Starch blockers such as miglitol—To decrease the amount of glucose absorbed into the blood
Some medicine needs to be given through injections, such as:
- Incretin mimetics—To stimulate the pancreas to produce insulin and decrease appetite (can assist with weight loss)
- Amylin analogs—To replace a protein of the pancreas that is low in people with type 2 diabetes
Insulin may be needed if:
- The body does not make enough of its own insulin.
- Blood glucose levels cannot be controlled with lifestyle changes and medicine.
Insulin is given through injections.
Blood Glucose Testing
You can check the level of glucose in your blood with a blood glucose meter. Checking your blood glucose levels
during the day can help you stay on track. It will also help your doctor determine if your treatment is working. Keeping track of blood sugar levels is especially important if you take insulin.
Regular testing may not be needed if your diabetes is under control and you don't take insulin. Talk with your doctor before stopping blood sugar monitoring.
An HbA1c test may also be done at your doctor's office. This is a measure of blood glucose control over a long period of time. Doctors advise that most people keep their HbA1c levels below 7%. Your exact goal may be different. Keeping HbA1c in your goal range can help lower the chance of complications.
Decreasing Risk of Complications
Over a long period of time, high blood glucose levels can damage vital organs. The kidneys, eyes, and nerves are most affected. Diabetes can also increase your risk of heart disease.
Maintaining goal blood glucose levels is the first step to lowering your risk of these complications. Other steps include:
- Take good care of your feet. Be on the lookout for any sores or irritated areas. Keep your feet dry and clean.
- Have your eyes checked once a year.
- Don't smoke. If you do, look for programs or products that can help you quit.
- Plan medical visits as recommended. | 3.690469 |
"We believe this is the first time bacterial horizontal gene transfer has been observed in eukaryotes at such scale," says senior author Igor Grigoriev of DOE JGI. "This study gets us closer to explaining the dramatic diversity across the genera of diatoms, morphologically, behaviorally, but we still haven't yet explained all the differences conferred by the genes contributed by the other taxa."
From plants, the diatom inherited photosynthesis, and from animals the production of urea. Bowler speculates that the diatom uses urea to store nitrogen, not to eliminate it like animals do, because nitrogen is a precious nutrient in the ocean. What's more, the tiny alga draws on the best of both worlds: it can convert fat into sugar, as well as sugar into fat, which is extremely useful in times of nutrient shortage.
The team documented more than 300 genes sourced from bacteria and found in both types of diatoms, pointing to their ancient origin and suggesting novel mechanisms of managing nutrients (for example, the utilization of organic carbon and nitrogen) and detecting cues from their environment.
Diatoms, encapsulated by elaborate lacework-like shells made of glass, are only about one-third of a strand of hair in diameter. "The diatom genomes will help us to understand how they can make these structures at ambient temperatures and pressures, something that humans are not able to do. If we can learn how they do it, we could open up all kinds of new nanotechnologies, like for building miniature silicon chips or for biomedical applications," says Bowler.
Diatoms reside in fresh or salt water and can be divided into two camps, centrics and pennates. The centric Thalassiosira resemble a round "Camembert" cheese box (only much smaller), and pennates like Phaeodactylum look more like a cross between a boomerang and a narrow three-cornered hat, hence the species name, tricornutum. Their shapes and habitats are diverse.
Contact: David Gilbert, DOE/Joint Genome Institute
The amount of nitrogen entering the Gulf each spring has increased about 300 percent since the 1960s, mainly due to increased agricultural runoff, Scavia said.
"Yes, the floodwaters really matter, but the fact that there's so much more nitrogen in the system now than there was back in the '60s is the real issue," he said. Scavia's computer model suggests that if today's floods contained the level of nitrogen from the last comparable flood, in 1973, the predicted dead zone would be 5,800 square miles rather than 8,500.
"The growth of these dead zones is an ecological time bomb," Scavia said. "Without determined local, regional and national efforts to control them, we are putting major fisheries at risk." The Gulf of Mexico/Mississippi River Watershed Nutrient Task Force has set the goal of reducing the size of the dead zone to about 1,900 square miles.
In 2009, the dockside value of commercial fisheries in the Gulf was $629 million. Nearly 3 million recreational fishers further contributed more than $1 billion to the Gulf economy, taking 22 million fishing trips.
The Gulf hypoxia research team is supported by NOAA's Center for Sponsored Coastal Ocean Research and includes scientists from the University of Michigan, Louisiana State University and the Louisiana Universities Marine Consortium. NOAA has funded investigations and forecast development for the dead zone in the Gulf of Mexico since 1990.
"While there is some uncertainty regarding the size, position and timing of this year's hypoxic zone in the Gulf, the forecast models are in overall agreement that hypoxia will be larger than we have typically seen in recent years," said NOAA Administrator Jane Lubchenco.
The actual size of the 2011 Gulf hypoxic zone will be announced
Contact: Jim Erickson, University of Michigan
The Americas IBA Directory
The conservation of rare birdlife has been the focus of Birdlife International for many years. In 1995 they began a project by the name of IBA, or Important Bird Area Program, to pinpoint areas across the globe that are home to endangered species, identifying the various species and protecting those areas to assist in conserving vital birdlife. At present, more than ten thousand of these areas have been identified, and conservation and environmental initiatives have been implemented. Now a new program has been established, namely the Americas IBA Directory.
Hundreds of bird species will benefit from the Americas IBA Directory, as it will be a guideline for both conservationists and authorities. The directory covers 57 different countries and lists 2,345 of the most significant areas that need to be protected at all costs. Authorities will be able to refer to the directory to find out which of their areas are vital to the survival of birdlife, which bird species are located in each area, and what the biodiversity of the area is, enabling them to take the right steps in protecting the natural habitat and the birds. Some areas that have been listed are significant in the migratory patterns of certain species, while others are crucial nesting sites for numerous endangered birds. Because a number of these areas are inhabited by local communities that also rely on natural resources such as water, authorities can assist these communities with sustainable development that will benefit not only the communities but the birdlife as well.
Hundreds of organizations have provided support and assistance in the compiling of the Americas IBA Directory. President of Bird Studies Canada, George Finney, explained: “From breeding grounds in Canada, to wintering sites in the south, and all points in between, it is imperative that we understand what is happening to bird populations and the forces that drive change. Bird Studies Canada is proud to work closely with our international partners on this issue, so that better management decisions and conservation actions can be taken.” A large number of agencies will be working together as IBA Caretakers, tracking migratory patterns and data in regard to bird populations, to note changes being made by the birds, and keeping the IBA Directory as up to date and accurate as possible. | 3.322088 |
First ever direct measurement of the Earth’s rotation
Geodesists are pinpointing the orientation of the Earth’s axis using the world’s most stable ring laser
A group of researchers at the Technical University of Munich (TUM) and the Federal Agency for Cartography and Geodesy (BKG) is the first to plot changes in the Earth’s axis through laboratory measurements. To do this, they constructed the world’s most stable ring laser in an underground lab and used it to determine changes in the Earth’s rotation. Previously, scientists were only able to track shifts in the polar axis indirectly by monitoring fixed objects in space. Capturing the tilt of the Earth’s axis and its rotational velocity is crucial for precise positional information on Earth – and thus for the accurate functioning of modern navigation systems, for instance. The scientists’ work has been recognized as an Exceptional Research Spotlight by the American Physical Society.
The Earth wobbles. Like a spinning top touched in mid-spin, its rotational axis fluctuates in relation to space. This is partly caused by gravitation from the sun and the moon. At the same time, the Earth’s rotational axis constantly changes relative to the Earth’s surface. On the one hand, this is caused by variation in atmospheric pressure, ocean loading and wind. These elements combine in an effect known as the Chandler wobble to create polar motion. Named after the scientist who discovered it, this phenomenon has a period of around 435 days. On the other hand, an event known as the “annual wobble” causes the rotational axis to move over a period of a year. This is due to the Earth’s elliptical orbit around the sun. These two effects cause the Earth’s axis to migrate irregularly along a circular path with a radius of up to six meters.
Capturing these movements is crucial to creating a reliable coordinate system that can feed navigation systems or project trajectory paths in space travel. “Locating a point to the exact centimeter for global positioning is an extremely dynamic process – after all, at our latitude, we are moving at around 350 meters to the east per second,” explains Prof. Karl Ulrich Schreiber, now station director of the Geodetic Observatory Wettzell, where the ring laser is housed. Schreiber had directed the project in TUM’s Research Section Satellite Geodesy. The Geodetic Observatory Wettzell is run jointly by TUM and BKG.
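The eastward speed Schreiber mentions follows from the Earth's rotation rate and the local latitude. A back-of-the-envelope sketch (the constants are standard textbook values and the function name is our own; the quoted figure is a round number):

```python
import math

# Eastward surface speed due to Earth's rotation: v = omega * R * cos(latitude).
OMEGA = 7.2921e-5    # Earth's rotation rate, rad/s (standard value)
R_EARTH = 6.371e6    # mean Earth radius, m (standard value)

def eastward_speed(lat_deg):
    """Surface speed in m/s at a given latitude in degrees."""
    return OMEGA * R_EARTH * math.cos(math.radians(lat_deg))

print(round(eastward_speed(49.1), 1))  # Wettzell sits at roughly 49.1 degrees N
print(round(eastward_speed(0.0), 1))   # fastest at the equator
```

The cosine factor is why precise positioning is latitude-dependent: the same angular rotation translates into very different linear speeds across the globe.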
The researchers have succeeded in corroborating the Chandler and annual wobble measurements based on the data captured by radio telescopes. They now aim to make the apparatus more accurate, enabling them to determine changes in the Earth’s rotational axis over a single day. The scientists also plan to make the ring laser capable of continuous operation so that it can run for a period of years without any deviations. “In simple terms,” concludes Schreiber, “in future, we want to be able to just pop down into the basement and find out how fast the Earth is accurately turning right now."
For more information please visit the TU München homepage http://portal.mytum.de/pressestelle/pressemitteilungen/NewsArticle_20111220_100621/newsarticle_view?. | 3.916652 |
Special thanks to our guest blogger, Chris Myers, U.S. Space and Rocket Center®, Huntsville, AL for this post
Bringing the Cosmos to Space Camp®!
At the U.S. Space and Rocket Center® and Space Camp, we are constantly looking for fun and innovative ways to teach our museum guests and trainees about space history and the science and math concepts that surround it. Naturally, we were excited to participate in the Harvard-Smithsonian Center for Astrophysics series of instructional webinars in order to get some fresh ideas and content. The creativity started to flow as we reviewed the background material, but the amount and quality of the lesson plans and information presented to us by Mary Dussault and Erin Braswell was impressive. By the end of the first hour of the webinar, we had solid ideas and lesson plans that could be implemented in every program from summer Day Camp for 5-year-olds to Advanced Space Academy® for high-school seniors. And they meet both state and national curriculum guidelines! In this case, our target subject was astronomy.
For our younger trainees, we adapted the activities that dealt with colors and filters into a hands-on component for our astronomy briefing “Tenacious Telescopes.” We use PVC pipe, colored felt and theater lighting gel in the primary colors to teach the trainees about how real telescopes like the Hubble Space Telescope use filters to look for specific information, and how scientists can put these single-color images together to make a full-color picture. In addition to making it look more like a real telescope, mounting the color filter inside a PVC pipe telescope has the added bonus of keeping our filters fingerprint and wrinkle free.
For our Advanced Academy (junior high to high school) trainees, we added an image processing component into our existing astronomy curriculum which is made up of four components. At the beginning of the week, the trainees participate in a lecture called “Exploring the Night Sky” where they learn the basics of astronomy and focus on finding and naming the constellations and deep space objects. Our second astronomy block is the “Micro Observatory Lab,” where our trainees use the Mobs software to compile full-color images of deep space objects. Our third astronomy block is a “Night Telescope” activity, where the trainees use real telescopes to find the same objects in the sky of which they compiled images the day before. And for our final astronomy block, our Advanced Academy trainees learn the stories behind selected constellations in our inflatable Star Lab.
We have been running the “Micro Observatory Lab” astronomy block since December, 2011, and have had more than 1,500 trainees from all over the world participate. We have so many students participating that we aren’t able to display all their artwork at once, so we have set up two small rotating exhibits of 12 featured photos each here at the U.S. Space and Rocket Center, one located in the Main Museum and the other located in the Science Lab used for our summer Space Academy for Educators® camp, and we plan to add a third, larger display to our computer lab this summer.
These kinds of seminars and programs are what make it so awesome to be a part of the network of Smithsonian Affiliates. Imagine all the fun, innovative and educational activities you can dream up with the help of these services! So get out there and sign up for a class today! And spare a glance for the colorful cosmos while you’re at it! | 3.141921 |
Songwriting For Beginners: ‘Just Enough’ Music TheoryBy Jeff Oxenford • Category: How To Write Songs, Songwriting Articles
(This is an article in the series “Songwriting For Beginners”. We are filing the series under the Songwriting Basics category.)
Question: How do you stop a guitarist from playing?
Answer: Put music in front of him.
That’s me. I can’t read music and I doubt I ever will. However, over the last three years, I’ve learned just enough about music theory to be dangerous. What I’ve found is that by understanding some basic concepts, I’ve been able to find that next chord I was always searching for.
The first step in understanding is that most songs are played in a single key and that the chords in the song come from that key. The formula (i.e., what order) you use for the chords is what makes up the song. For example, blues often uses the 1, 4, and 5 chords. If you’re playing blues in E, the chords are E, A, B (or B7). The blues progression in the key of C uses C, F, and G.
If you can understand the table below, you’ve got the majority of theory you need.
|1 (root)||2||3||4||5||6||7||8 (root)|
|Major||minor||minor||Major||Major or Dominant||minor||Diminished||Major|
Here’s how to understand this table:
Guitar frets are in half (H) step intervals. In other words, moving up one fret is moving up a half (H) step. Moving up 2 frets is a whole (W) step.
Notes and intervals
On a guitar, the open string and the 12th fret on the same string are the same note (just different octave). If you look at the A string, the notes are:
|NOTE||A||A# or Bb||B||C||C# or Db||D||D# or Eb||E||F||F# or Gb||G||G# or Ab||A|
To go from A to B is a whole (W) step. To go from A to A# (or Bb) is a half (H) step.
Also, note that for B to C and E to F, there is only a half step. There is no B# (Cb) or E# (Fb).
The major scale has the following intervals: W W H W W W H (do, re, mi, fa, sol, la, ti, do).
Applying this formula, the notes in the A major scale are: A, B, C#, D, E, F#, G#, A.
Practice tip – On any string of the guitar, apply the formula W, W, H, W, W, W, H. In other words pick the string: Open, 2, 4, 5, 7, 9, 11, 12. You’ve just played a major scale.
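The W W H W W W H formula can be sketched in a few lines of Python (the chromatic note list and helper name are our own; sharps are used throughout rather than flats):

```python
# Walk the chromatic scale using the major-scale formula.
# W = 2 frets (a whole step), H = 1 fret (a half step).
CHROMATIC = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
STEPS = [2, 2, 1, 2, 2, 2, 1]  # W W H W W W H

def major_scale(root):
    """Return the eight notes of the major scale starting at `root`."""
    i = CHROMATIC.index(root)
    notes = [root]
    for step in STEPS:
        i = (i + step) % 12  # wrap around the 12-note chromatic circle
        notes.append(CHROMATIC[i])
    return notes

print(major_scale("A"))  # ['A', 'B', 'C#', 'D', 'E', 'F#', 'G#', 'A']
```

Notice that the code never special-cases B/C or E/F: because those pairs sit a half step apart in the chromatic list, the formula handles them automatically.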
Numbers for the Notes
We describe the notes in a scale by their numbers (1 – 8).
When you’re playing in the key of A, A is the 1 note, B is the 2 note – you get the idea.
Chords in the major scale
To find chords that will work in the key of A, take the root notes from the scale and apply the chord types from the table below:
|Major||minor||minor||Major||Major or Dom||minor||Diminished||Major|
The 1, 4 and 5 chord are major chords (A, D, E).
The 2, 3 and 6 chords are minor (Bm, etc.)
The 7th chord is diminished
Below is a listing of the chords in the major scale for all keys. Use the table by following a row:
Practice tip: Take one row and play the chords in order. It should sound like the major scale. Then try the 1, 4 and 5 chords. Move to another row and try the 1, 4, 5. It should sound pretty familiar.
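Pairing each scale degree with its chord quality from the table gives the diatonic chords of any key. A small sketch (helper names and the suffix convention – empty for major, "m" for minor, "dim" for diminished – are our own):

```python
# Harmonize the major scale: each degree gets the quality from the table
# above (1=Major, 2=minor, 3=minor, 4=Major, 5=Major, 6=minor, 7=diminished).
CHROMATIC = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
STEPS = [2, 2, 1, 2, 2, 2, 1]           # W W H W W W H
QUALITIES = ["", "m", "m", "", "", "m", "dim"]  # suffixes for degrees 1-7

def chords_in_key(root):
    """Return the seven diatonic chords of the major key on `root`."""
    i = CHROMATIC.index(root)
    scale = [root]
    for step in STEPS[:6]:              # six steps give degrees 1 through 7
        i = (i + step) % 12
        scale.append(CHROMATIC[i])
    return [note + q for note, q in zip(scale, QUALITIES)]

print(chords_in_key("A"))  # ['A', 'Bm', 'C#m', 'D', 'E', 'F#m', 'G#dim']
```

Running it for each key reproduces the rows of the all-keys table described above, which is a handy way to double-check your own chord charts.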
How do you use this in Songwriting?
Most songs in folk, rock and blues primarily use combinations of the 1, 4, 5 chords. The 6 and 3 are used often and sometimes the 2. The 7 chord (diminished) isn’t used as often, but it does have a very distinctive sound.
*(Other books use roman numerals, so be ready to see I, IV, V).
For example, 12-bar blues in A follows the formula 1,1,1,1,4,4,1,1,5,4,1,5 (each chord played for a four count).
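The 12-bar formula can be mapped onto chord names the same way; here is a short sketch (helper names are our own):

```python
# Translate the 12-bar blues degree formula into chord names for a key.
CHROMATIC = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
STEPS = [2, 2, 1, 2, 2, 2, 1]                    # major-scale intervals
BLUES_FORMULA = [1, 1, 1, 1, 4, 4, 1, 1, 5, 4, 1, 5]

def twelve_bar_blues(root):
    """Return the 12 bars of a blues in `root` as chord names."""
    i = CHROMATIC.index(root)
    scale = [root]
    for step in STEPS:
        i = (i + step) % 12
        scale.append(CHROMATIC[i])
    # Degrees are 1-based, so degree d is scale[d - 1].
    return [scale[d - 1] for d in BLUES_FORMULA]

print(twelve_bar_blues("A"))
# ['A', 'A', 'A', 'A', 'D', 'D', 'A', 'A', 'E', 'D', 'A', 'E']
```

Swap in "E" and you get the E, A, B progression mentioned earlier; the formula stays the same, only the key changes.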
Check out more Songwriting Basics
Republished with permission by Jeff’s Songwriting | 3.147369 |
Short Essay Questions
The 60 short essay questions listed in this section require a one to two sentence answer. They ask students to demonstrate a deeper understanding of the text. Students must describe what they've read, rather than just recall it.
Short Essay Question - Prologue, Chapters 1, 2
1. If you were the albino attacker, would you have shot Jacques as quickly? Why or why not?
2. Why is the opening of the book so suspenseful?
3. If you were Langdon and the police wanted to question you about Jacques' murder, what would you do?
4. What are Silas' motivations, based on his actions in the first couple of chapters?
Short Essay Question - Chapters 3, 4, 5
5. What is Langdon's reaction to the murder, up to the end of Chapter 5?
This section contains 1,239 words (approx. 5 pages at 300 words per page).
Length: 43.200 cm
Mummy of an ibis
From: Abydos, Egypt
Date: Roman Period, after 30 BC
An ibis is a kind of wading bird with a long curved beak to dig around for food in the river mud. The Egyptian god Thoth’s name means ‘he who is like the ibis’, and Thoth was often shown with a man’s body and the head of an ibis. Thoth was the scribe of the gods, god of the moon, and in charge of writing, maths and language. He also helped judge the dead (see ‘Weighing the Heart’).
In the Late Period, it became very popular to mummify animals and leave them as presents (called ‘offerings’) to the gods. Many thousands of ibis were mummified as offerings to Thoth. They were wrapped in bandages and some were put into pottery jars. This one has been wrapped in a careful pattern, and is very well preserved. | 3.493435 |
The Immune System
Because the human body provides an ideal environment for many microbes, they try to pass your skin barrier and enter. Your immune system is a bodywide network of cells, tissues, and organs that has evolved to defend you against such "foreign" invasions. The proper targets of your immune system are infectious organisms: bacteria such as streptococci; fungi such as Candida, the cause of yeast infections; parasites, including the worm-like microbes that cause malaria; and viruses such as the SARS virus.
Markers of Self
At the heart of the immune response is the ability to distinguish between "self" and "non-self." Every cell in your body carries the same set of distinctive surface proteins that distinguish you as "self." Normally your immune cells do not attack your own body tissues, which all carry the same pattern of self-markers; rather, your immune system coexists peaceably with your other body cells in a state known as self-tolerance.
This set of unique markers on human cells is called the major histocompatibility complex (MHC). There are two classes: MHC Class I proteins, which are on all cells, and MHC Class II proteins, which are only on certain specialized cells.
Markers of Non-Self
Any non-self substance capable of triggering an immune response is known as an antigen. An antigen can be a whole non-self cell, a bacterium, a virus, an MHC marker protein or even a portion of a protein from a foreign organism.
The distinctive markers on antigens that trigger an immune response are called epitopes. When tissues or cells from another individual enter your body carrying such antigenic non-self epitopes, your immune cells react. This explains why transplanted tissues may be rejected as foreign and why antibodies will bind to them.
Markers of Self: Major Histocompatibility Complex
Your immune cells recognize major histocompatibility complex proteins(MHC) when they distinguish between self and non-self. An MHC protein serves as a recognizable scaffold that presents pieces (peptides) of a foreign protein (antigenic) to immune cells.
An empty "foreign" MHC scaffold itself can act as an antigen when donor organs or cells are introduced into a patient's body. These MHC self-marker scaffolds are also known as a patient's "tissue type" or as human leukocyte antigens (HLA) when a patient's white blood cells are being characterized.
For example, when the immune system of a patient receiving a kidney transplant detects a non-self "tissue type," the patient's body may rally its own immune cells to attack.
Every cell in your body is covered with these MHC self-marker proteins, and--except for identical twins--individuals carry different sets. MHC marker proteins are as distinct as blood types and come in two categories--MHC Class I: humans bear 6 markers out of 200 possible variations; and MHC Class II: humans display 8 out of about 230 possibilities.
Organs of the Immune System
The organs of your immune system are positioned throughout your body.
They are called lymphoid organs because they are home to lymphocytes--the white blood cells that are key operatives of the immune system. Within these organs, the lymphocytes grow, develop, and are deployed.
Bone marrow, the soft tissue in the hollow center of bones, is the ultimate source of all blood cells, including the immune cells.
The thymus is an organ that lies behind the breastbone; lymphocytes known as T lymphocytes, or just T cells, mature there.
The spleen is a flattened organ at the upper left of the abdomen. Like the lymph nodes, the spleen contains specialized compartments where immune cells gather and confront antigens.
In addition to these organs, clumps of lymphoid tissue are found in many parts of the body, especially in the linings of the digestive tract and the airways and lungs--gateways to the body. These tissues include the tonsils, adenoids, and appendix.
The organs of your immune system are connected with one another and with other organs of the body by a network of lymphatic vessels.
Lymphocytes can travel throughout the body using the blood vessels. The cells can also travel through a system of lymphatic vessels that closely parallels the body's veins and arteries. Cells and fluids are exchanged between blood and lymphatic vessels, enabling the lymphatic system to monitor the body for invading microbes. The lymphatic vessels carry lymph, a clear fluid that bathes the body's tissues.
Small, bean-shaped lymph nodes sit along the lymphatic vessels, with clusters in the neck, armpits, abdomen, and groin. Each lymph node contains specialized compartments where immune cells congregate and encounter antigens.
Immune cells and foreign particles enter the lymph nodes via incoming lymphatic vessels or the lymph nodes' tiny blood vessels. All lymphocytes exit lymph nodes through outgoing lymphatic vessels. Once in the bloodstream, they are transported to tissues throughout the body. They patrol everywhere for foreign antigens, then gradually drift back into the lymphatic system to begin the cycle all over again.
Cells of the Immune System
Cells destined to become immune cells, like all blood cells, arise in your body's bone marrow from stem cells. Some develop into myeloid progenitor cells while others become lymphoid progenitor cells.
The myeloid progenitors develop into the cells that respond early and nonspecifically to infection. Neutrophils engulf bacteria upon contact and send out warning signals. Monocytes turn into macrophages in body tissues and gobble up foreign invaders. Granule-containing cells such as eosinophils attack parasites, while basophils release granules containing histamine and other allergy-related molecules.
Lymphoid precursors develop into the small white blood cells called lymphocytes. Lymphocytes respond later in infection. They mount a more specifically tailored attack after antigen-presenting cells such as dendritic cells (or macrophages) display their catch in the form of antigen fragments. The B cell turns into a plasma cell that produces and releases into the bloodstream thousands of specific antibodies. The T cells coordinate the entire immune response and eliminate the viruses hiding in infected cells.
B cells work chiefly by secreting soluble substances known as antibodies. They mill around a lymph node, waiting for a macrophage to bring an antigen or for an invader such as a bacteria to arrive. When an antigen-specific antibody on a B cell matches up with an antigen, a remarkable transformation occurs.
The antigen binds to the antibody receptor, the B cell engulfs it, and, after a special helper T cell joins the action, the B cell becomes a large plasma cell factory that produces identical copies of specific antibody molecules at an astonishing pace--up to 10 million copies an hour.
Each antibody is made up of two identical heavy chains and two identical light chains, shaped to form a Y.
The sections that make up the tips of the Y's arms vary greatly from one antibody to another; this is called the variable region. It is these unique contours in the antigen-binding site that allow the antibody to recognize a matching antigen, much as a lock matches a key.
The stem of the Y links the antibody to other participants in the immune defenses. This area is identical in all antibodies of the same class--for instance, all IgEs--and is called the constant region.
Antibodies belong to a family of large protein molecules known as immunoglobulins.
Scientists have identified nine chemically distinct classes of human immunoglobulins, four kinds of IgG and two kinds of IgA, plus IgM, IgE, and IgD.
Immunoglobulins G, D, and E are similar in appearance. IgG, the major immunoglobulin in the blood, is also able to enter tissue spaces; it works efficiently to coat microorganisms, speeding their destruction by other cells in the immune system. IgD is almost exclusively found inserted into the membrane of B cells, where it somehow regulates the cell's activation. IgE is normally present in only trace amounts, but it is responsible for the symptoms of allergy.
IgA--a doublet--guards the entrance to the body. It concentrates in body fluids such as tears, saliva, and secretions of the respiratory and gastrointestinal tracts.
IgM usually combines in star-shaped clusters. It tends to remain in the bloodstream, where it is very effective in killing bacteria.
Scientists long wondered how all the genetic information needed to make millions of different antibodies could fit in a limited number of genes.
The answer is that antibody genes are spliced together from widely scattered bits of DNA located in two different chromosomes. Each antibody molecule is made up of two separate chains, a heavy chain and a light chain. The heavy chain is where the binding of antigens occurs, so much genetic variation is involved in its assembly. For example, to form a heavy chain, 1 of 400 possible variable gene segments (V) combines with 1 out of 15 diversity segments (D) and 1 out of 4 joining (J) segments. This makes 24,000 possible combinations for the DNA encoding the heavy chain alone. As this part of the gene assembles, it joins the variable coding segments with those for the constant-C segments of the heavy-chain molecule.
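The heavy-chain arithmetic above is easy to check; a one-line sketch using the approximate segment counts from the text (variable names are our own):

```python
# Combinatorics of V(D)J recombination for the antibody heavy chain,
# using the approximate segment counts given in the text.
V_SEGMENTS = 400  # variable segments
D_SEGMENTS = 15   # diversity segments
J_SEGMENTS = 4    # joining segments

heavy_chain_combinations = V_SEGMENTS * D_SEGMENTS * J_SEGMENTS
print(heavy_chain_combinations)  # 24000, matching the figure in the text
```

Since the light chain is assembled independently from its own segment pools, the total number of possible antibodies multiplies further, which is how a limited number of genes yields millions of distinct antibodies.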
T cells contribute to your immune defenses in two major ways. Some help regulate the complex workings of the overall immune response, while others are cytotoxic and directly contact infected cells and destroy them.
Chief among the regulatory T cells are helper T cells. They are needed to activate many immune cells, including B cells and other T cells.
Cytotoxic T cells (sometimes called killer T cells) help rid your body of cells that have been infected by viruses as well as cells that have been transformed by cancer but have not yet adapted to evade the immune detection system. They are also responsible for the rejection of tissue and organ grafts.
Cytokines are diverse and potent chemical messengers secreted by the cells of your immune system. They are the chief communication signals of your T cells. Cytokines include interleukins, growth factors, and interferons.
Lymphocytes, including both T cells and B cells, secrete cytokines called lymphokines, while the cytokines of monocytes and macrophages are dubbed monokines. Many of these cytokines are also known as interleukins because they serve as messengers between white cells, or leukocytes.
Interferons are naturally occurring cytokines that may boost the immune system's ability to recognize cancer as a foreign invader.
Binding to specific receptors on target cells, cytokines recruit many other cells and substances to the field of action. Cytokines encourage cell growth, promote cell activation, direct cellular traffic, and destroy target cells--including cancer cells.
When cytokines attract specific cell types to an area, they are called chemokines. These are released at the site of injury or infection and call other immune cells to the region to help repair damage and defend against infection.
Killer Cells: Cytotoxic Ts and NKs
At least two types of lymphocytes are killer cells--cytotoxic T cells and natural killer cells. Both types contain granules filled with potent chemicals. Both types kill on contact. They bind their targets, aim their weapons, and deliver bursts of lethal chemicals.
To attack, cytotoxic T cells need to recognize a specific antigen bound to self-MHC markers, whereas natural killer (NK) cells will recognize and attack cells lacking these. This gives NK cells the potential to attack many types of foreign cells.
Phagocytes and Their Relatives
Some immune cells have more than one name. For example, the name "phagocytes" is given to the large immune cells that can engulf and digest foreign invaders, and the name "granulocytes" refers to immune cells that carry granules laden with killer chemicals.
Phagocytes include monocytes, which circulate in the blood; macrophages, which are found in tissues throughout the body; dendritic cells, which are more stationary, monitoring their environment from one spot such as the skin; and neutrophils, cells that circulate in the blood but move into tissues when they are needed.
Macrophages are versatile cells; besides acting as phagocytic scavengers, they secrete a wide variety of signaling cytokines (called monokines) that are vital to the immune response.
Neutrophils are both phagocytes and granulocytes: they contain granules filled with potent chemicals. These chemicals, in addition to destroying microorganisms, play a key role in acute inflammatory reactions. Other types of granulocytes are eosinophils and basophils, which degranulate by spraying their chemicals onto harmful cells or microbes. The mast cell is a twin of the basophil, except it is not a blood cell. Rather, it is responsible for allergy symptoms in the lungs, skin, and linings of the nose and intestinal tract.
A related structure, the blood platelet, is a cell fragment. Platelets, too, contain granules. They promote blood clotting and wound repair, and activate some immune defenses.
Phagocytes in the Body
If foreign invaders succeed in getting past your skin barriers and manage to reach body tissues, they are usually recognized, ingested, and killed by phagocytes strategically positioned throughout the body. Macrophages and neutrophils are the main phagocytes involved, with macrophages as the first line of defense. Monocytes stop circulating in the blood and mature into specialized macrophages that migrate into the tissues of the body and prepare for invasion. Large numbers of mature macrophages reside in connective tissue, along the digestive tract, in the lungs, in the spleen, and even along certain blood vessels in the liver, where they are known as Kupffer cells.
Neutrophils are short-lived immune cells that remain circulating in the blood. When tissue-based macrophages encounter an invader, neutrophils soon reinforce their immune response by coming to the site of infection in large numbers.
The complement system consists of a series of about 25 proteins that work to "complement" the work of antibodies in destroying bacteria. Complement also helps rid the body of antigen-antibody complexes. Complement proteins are the culprits that cause blood vessels to become dilated and leaky, causing redness and swelling during an inflammatory response.
Complement proteins circulate in the blood in an inactive form. The so-called "complement cascade" is set off when the first complement molecule, C1, encounters antibody bound to antigen in an antigen-antibody complex. Each of the complement proteins performs its specialized job, acting, in turn, on the molecule next in line. The end product is a cylinder that punctures the cell membrane and, by allowing fluids and molecules to flow in and out, dooms the target cell.
Mounting an Immune Response
Microbes attempting to get into your body must first get past your skin and mucous membranes, which not only pose a physical barrier but are rich in scavenger cells and IgA antibodies.
Next, they must elude a series of nonspecific defenses--cells and substances that attack all invaders regardless of the epitopes they carry. These include patrolling phagocytes, granulocytes, NK cells, and complement.
Infectious agents that get past these nonspecific barriers must finally confront specific weapons tailored just for them. These include both antibodies and cytotoxic T cells.
Both B cells and T cells carry customized receptor molecules that allow them to recognize and respond to their specific targets.
The B cell's antigen-specific receptor that sits on its outer surface is also a sample of the antibody it is prepared to manufacture; this antibody-receptor recognizes antigen in its natural state.
The T cell's receptor systems are more complex. T cells can recognize an antigen only after the antigen is processed and presented in combination with a special type of major histocompatibility complex (MHC) marker. Killer T cells only recognize antigens in the grasp of Class I MHC markers, while helper T cells only recognize antigens in the grasp of Class II MHC markers. This complicated arrangement assures that T cells act only on precise targets and at close range.
Activation of B Cells to Make Antibody
The B cell uses its antibody-receptor to bind a matching antigen, which it then engulfs and processes. This triggers the B cell to become a large plasma cell producing millions of copies of the same specific antibody. These antibodies then circulate in the bloodstream in search of more matching antigens. B cells cannot themselves kill an invading organism, but their antibodies mark invaders for destruction by other immune cells and by complement.
Activation of T Cells: Helper
Helper T cells only recognize antigen in the grasp of Class II MHC markers. An antigen-presenting cell--such as a macrophage or a dendritic cell--breaks down the antigen it devours, then it places small pieces (peptides) on its surface along with a Class II MHC marker. By exhibiting its catch in this way, antigen-presenting cells enable specific receptors on helper T cells to bind the antigen and confirm (via CD4 protein) that an invasion has occurred.
After binding, a resting helper T cell quickly becomes an activated helper T. It assumes command of the immune response, giving orders to increase the number of specific antibody-producing plasma cells and the cytotoxic killer cells needed to quell the attack.
Activation of T Cells: Cytotoxic
Killer T cells only recognize antigen in the grasp of Class I MHC markers. Here a resting cytotoxic T cell recognizes virus fragments, which are displayed by a macrophage in combination with a Class I MHC marker. A receptor on a circulating, resting cytotoxic T cell (and CD8 protein) recognizes the antigen-protein complex and binds to it. The binding process and an activated helper T cell activate the cytotoxic T cell. Because the surfaces of other infected cells bear the same virus fragments in combination with Class I MHC markers, the activated cytotoxic T cells can quickly recognize, attack, and destroy the diseased cell.
Regulatory T Cells
Your immune system also has a braking mechanism, a checkpoint to prevent immune responses to self. Without this checkpoint, autoimmune disease could flourish. An additional type of immune cell--the regulatory T cell--serves as this critical braking agent.
Researchers don't yet know exactly how regulatory T cells operate. Some think these T cells recognize and compete for the same antigens as those that activate helper and cytotoxic T cells, but that regulatory T cells zero in on a different epitope. Another possibility is that cytotoxic or helper T cells only multiply when regulatory T cells are absent.
Regulatory T cells have become important to researchers who are trying to increase the efficacy of vaccines for cancer and AIDS. In addition to increasing the antigenicity of the immunizing element, a better understanding of regulatory T cells will permit scientists to reduce the immune system's brake activity, which often limits the effectiveness of vaccines.
Immunity: Active and Passive
Whenever T cells and B cells are activated, some become "memory" cells. The next time that an individual encounters that same antigen, the immune system is primed to destroy it quickly. This is active immunity because the body's immune system prepares itself for future challenges. Long-term active immunity can be naturally acquired by infection or artificially acquired by vaccines made from infectious agents that have been inactivated or, more commonly, from minute portions of the microbe.
Short-term passive immunity can be transferred artificially from one individual to another via antibody-rich serum; similarly, a mother enables an infant to naturally acquire protection while growing within her by donating her antibodies and certain immune cells. This is passive immunity because the infant who is protected does not produce antibodies, but borrows them.
Disorders of the Immune System: Allergy
When your immune system malfunctions, it can unleash a torrent of disorders and diseases.
One of the most familiar is allergy. Allergies such as hay fever and hives are related to the antibody known as IgE. The first time an allergy-prone person is exposed to an allergen--for instance, grass pollen--the individual's B cells make large amounts of grass pollen IgE antibody. These IgE molecules attach to granule-containing cells known as mast cells, which are plentiful in the lungs, skin, tongue, and linings of the nose and gastrointestinal tract. The next time that person encounters grass pollen, the IgE-primed mast cell releases powerful chemicals that cause the wheezing, sneezing, and other symptoms of allergy.
Disorders of the Immune System: Autoimmune Disease
Sometimes the immune system's recognition apparatus breaks down, and the body begins to manufacture antibodies and T cells directed against the body's own cells and organs.
Such cells and autoantibodies, as they are known, contribute to many diseases. For instance, T cells that attack pancreas cells contribute to diabetes, while an autoantibody known as rheumatoid factor is common in persons with rheumatoid arthritis.
Disorders of the Immune System: Immune Complex Disease
Immune complexes are clusters of interlocking antigens and antibodies.
Normally they are rapidly removed from the bloodstream. In some circumstances, however, they continue to circulate, and eventually they become trapped in, and damage, the tissues of the kidneys, as seen here, or the lungs, skin, joints, or blood vessels.
Disorders of the Immune System: AIDS
When the immune system is lacking one or more of its components, the result is an immunodeficiency disorder.
These disorders can be inherited, acquired through infection, or produced as an inadvertent side effect of drugs such as those used to treat cancer or transplant patients.
AIDS is an immunodeficiency disorder caused by a virus that destroys helper T cells. The virus copies itself incessantly and invades helper T cells and macrophages, the very cells needed to organize an immune defense. The AIDS virus splices its DNA into the DNA of the cell it infects; the cell is thereafter directed to churn out new viruses.
Human Tissue Typing for Transplants
Although MHC proteins are required for T cell responses against foreign invaders, they can pose difficulty during transplantation. Every cell in the body is covered with MHC self-markers, and each person bears a slightly unique set. If a T lymphocyte recognizes a non-self MHC scaffold, it will rally immune cells to destroy the cell that bears it. For successful organ or blood stem cell transplantations, doctors must pair organ recipients with donors whose MHC sets match as closely as possible. Otherwise, the recipient's T cells will likely attack the transplant, leading to graft rejection.
To find good matches, tissue typing is usually done on white blood cells, or leukocytes. In this case, the MHC-self-markers are called human leukocyte antigens, or HLA. Each cell has a double set of six major HLA markers, HLA-A, B, and C, and three types of HLA-D. Since each of these antigens exists, in different individuals, in as many as 20 varieties, the number of possible HLA types is about 10,000. The genes that encode the HLA antigens are located on chromosome 6.
A child in the womb carries foreign antigens from the father as well as immunologically compatible self-antigens from the mother.
One might expect this condition to trigger a graft rejection, but it does not because the uterus is an "immunologically privileged" site where immune responses are somehow subdued.
Immunity and Cancer
When normal cells turn into cancer cells, some of the antigens on their surface change. These cells, like many body cells, constantly shed bits of protein from their surface into the circulatory system. Often, tumor antigens are among the shed proteins.
These shed antigens prompt action from immune defenders, including cytotoxic T cells, natural killer cells, and macrophages. According to one theory, patrolling cells of the immune system provide continuous bodywide surveillance, catching and eliminating cells that undergo malignant transformation. Tumors develop when this immune surveillance breaks down or is overwhelmed.
A new approach to cancer therapy uses antibodies that have been specially made to recognize specific cancers.
When coupled with natural toxins, drugs, or radioactive substances, the antibodies seek out their target cancer cells and deliver their lethal load. Alternatively, toxins can be linked to a lymphokine and routed to cells equipped with receptors for the lymphokine.
Dendritic Cells That Attack Cancer
Another approach to cancer therapy takes advantage of the normal role of the dendritic cell as an immune educator. Dendritic cells grab antigens from viruses, bacteria, or other organisms and wave them at T cells to recruit their help in an initial T cell immune response. This works well against foreign cells that enter the body, but cancer cells often evade the self/non-self detection system. By modifying dendritic cells, researchers are able to trigger a special kind of autoimmune response that includes a T cell attack of the cancer cells. Because a cancer antigen alone is not enough to rally the immune troops, scientists first fuse a cytokine to a tumor antigen with the hope that this will send a strong antigenic signal. Next, they grow a patient's dendritic cells in the incubator and let them take up this fused cytokine-tumor antigen. This enables the dendritic cells to mature and eventually display the same tumor antigens as appear on the patient's cancer cells. When these special mature dendritic cells are given back to the patient, they wave their newly acquired tumor antigens at the patient's immune system, and those T cells that can respond mount an attack on the patient's cancer cells.
The Immune System and the Nervous System
Biological links between the immune system and the central nervous system exist at several levels.
Hormones and other chemicals such as neuropeptides, which convey messages among nerve cells, have been found also to "speak" to cells of the immune system--and some immune cells even manufacture typical neuropeptides. In addition, networks of nerve fibers have been found to connect directly to the lymphoid organs.
The picture that is emerging is of closely interlocked systems facilitating a two-way flow of information. Immune cells, it has been suggested, may function in a sensory capacity, detecting the arrival of foreign invaders and relaying chemical signals to alert the brain. The brain, for its part, may send signals that guide the traffic of cells through the lymphoid organs.
A hybridoma is a hybrid cell produced by injecting a specific antigen into a mouse, collecting an antibody-producing cell from the mouse's spleen, and fusing it with a long-lived cancerous immune cell called a myeloma cell. Individual hybridoma cells are cloned and tested to find those that produce the desired antibody. Their many identical daughter clones will secrete, over a long period of time, millions of identical copies of made-to-order "monoclonal" antibodies.
Thanks to hybridoma technology, scientists are now able to make large quantities of specific antibodies.
Genetic engineering allows scientists to pluck genes--segments of DNA--from one type of organism and to combine them with genes of a second organism.
In this way, relatively simple organisms such as bacteria or yeast can be induced to make quantities of human proteins, including interferons and interleukins. They can also manufacture proteins from infectious agents, such as the hepatitis virus or the AIDS virus, for use in vaccines.
The SCID-hu Mouse
The SCID mouse, which lacks a functioning immune system of its own, is helpless to fight infection or reject transplanted tissue.
By transplanting immature human immune tissues and/or immune cells into these mice, scientists have created an in vivo model that promises to be of immense value in advancing our understanding of the immune system. | 3.908893 |
In the interior of central Africa the first
Catholic missions were established by Cardinal Lavigerie's White Fathers in 1879. In Uganda some
progress was made under the not unfriendly local ruler, Mtesa; but his successor, Mwanga, determined
to root out Christianity among his people, especially after a Catholic subject, St. Joseph Mkasa,
reproached him for his debauchery and for his massacre of the Protestant missionary James Hannington
and his caravan. Mwanga was addicted to unnatural vice and his anger against Christianity, already
kindled by ambitious officers who played on his fears, was kept alight by the refusal of Christian
boys in his service to minister to his wickedness. Joseph Mkasa himself was the first
victim: Mwanga seized on a trifling pretext and on November 15, 1885, had him
beheaded. To the chieftain's astonishment the Christians were not cowed by this sudden outrage, and
in May of the following year the storm burst. When he called for a young 'page' called Mwafu, Mwanga
learned that he had been receiving religious instruction from another page, St. Denis Sebuggwawo;
Denis was sent for, and the king thrust a spear through his throat. That night guards were posted
round the royal residence to prevent anyone from escaping.
Charles Lwanga, who had succeeded Joseph Mkasa in charge of the 'pages', secretly baptized four of
them who were catechumens; among them St Kizito, a boy of thirteen whom Lwanga had repeatedly saved
from the designs of the king. Next morning the pages were all drawn up before Mwanga, and Christians
were ordered to separate themselves from the rest: led by Lwanga and Kizito, the oldest and
youngest, they did so—fifteen young men, all under twenty-five years of age. They were joined by two
others already under arrest and by two soldiers. Mwanga asked them if they intended to remain
Christians. "Till death!" came the response. "Then put them to death!"
The appointed place of execution, Namugongo, was thirty-seven miles away, and the convoy set out at once. Three of the youths were killed on the road; the others underwent a cruel imprisonment of seven days at Namugongo while a huge pyre was prepared. Then on Ascension day, June 3, 1886, they were brought out, stripped of their clothing, bound, and each wrapped in a mat of reed: the living faggots were laid on the pyre (one boy, St Mbaga, was first killed by a blow on the neck by order of his father who was the chief executioner), and it was set alight.
The persecution spread and Protestants as well
as Catholics gave their lives rather than deny Christ. A leader among the confessors was St Matthias
Murumba, who was put to death with revolting cruelty; he was a middle-aged man, assistant judge to
the provincial chief, who first heard of Jesus Christ from Protestant missionaries and later was
baptized by Father Livinhac, W.F. Another older victim, who was beheaded, was St Andrew Kagwa, chief
of Kigowa, who had been the instrument of his wife's conversion and had gathered a large body of
catechumens round him. This Andrew together with Charles Lwanga and Matthias Murumba and nineteen
others (seventeen of the total being young royal servants) were solemnly beatified in 1920. They
were canonized in 1964.
When the White Fathers were expelled from the country, the new Christians carried on their work, translating the catechism into their native language, printing it, and giving secret instruction on the faith. Without priests, liturgy, and sacraments their faith, intelligence, courage, and wisdom kept the Catholic Church alive and growing in Uganda. When the White Fathers returned after King Mwanga's death, they found five hundred Christians and one thousand catechumens waiting for them.
Computer Models How Buds Grow Into Leaves
Posted on March 02, 2012 at 08:24:51 am
"A bud does not grow in all directions at the same rate," said Samantha Fox from the John Innes Centre on Norwich Research Park. "Otherwise leaves would be domed like a bud, not flat with a pointed tip."
By creating a computer model to grow a virtual leaf, the BBSRC-funded scientists managed to discover simple rules of leaf growth.
Similar to the way a compass works, plant cells have an inbuilt orientation system. Instead of a magnetic field, the cells have molecular signals to guide the axis on which they grow. As plant tissues deform during growth, the orientation and axis change.
The molecular signals become patterned from an early stage within the bud, helping the leaf shape to emerge.
The researchers filmed a growing leaf of Arabidopsis, a relative of oil seed rape, to help create a model that could simulate the growing process. They were able to film individual cells and track them as the plant grew.
It was also important to unpick the workings behind the visual changes and to test them in normal and mutant plants.
"The model is not just based on drawings of leaf shape at different stages," said Professor Enrico Coen. "To accurately recreate dynamic growth from bud to leaf, we had to establish the mathematical rules governing how leaf shapes are formed."
With this knowledge programmed into the model, developed in collaboration with Professor Andrew Bangham's team at the University of East Anglia, it can run independently to build a virtual but realistic leaf.
Professor Douglas Kell, Chief Executive of BBSRC said: "This exciting research highlights the potential of using computer and mathematical models for biological research to help us tackle complex questions and make predictions for the future. Computational modelling can give us a deeper and more rapid understanding of the biological systems that are vital to life on earth."
The model could now be used to help identify the genes that control leaf shape and whether different genes are behind different shapes.
"This simple model could account for the basic development and growth of all leaf shapes," said Fox. "The more we understand about how plants grow, the better we can prepare for our future -- providing food, fuel and preserving diversity." | 3.609688 |
Irish Druids And Old Irish Religions
EARLY RELIGIONS OF THE IRISH
One of the most philosophical
statements from Max Müller is to this effect:
"Whatever we know of early religion, we always see that it presupposes vast
periods of an earlier development." This is exhibited in the history of all
peoples that have progressed in civilization, though we may have to travel far
back on the track of history to notice transformations of thought or belief.
When the late Dr. Birch told us that a pyramid, several hundreds of years older
than the Great Pyramid, contained the name of Osiris, we knew that at
least the Osirian part of Egyptian mythology was honoured some six or seven
thousand years ago. What the earlier development of religion there was, or how
the conception of a dying and risen Osiris arose, at so remote a period, may
well excite our wonder.
Professor Jebb writes--"There was a time when they (early man)
began to speak of the natural powers as persons, and yet had not forgotten that
they were really natural powers, and that the persons' names were merely signs."
Yet this goes on the assumption that religion--or rather dogmas thereof--sprang
from reflections upon natural phenomena. In this way, the French author of Sirius
satisfied himself, particularly on philological grounds, that the idea of God
sprang from an association with thunder and the barking of a dog.
We are assured by Max Müller, that religion is a word that has changed from
century to century, and that "the word rose to the surface thousands of
years ago." Taking religion to imply an inward feeling of reverence
toward the unseen, and a desire to act in obedience to the inward law of right,
religion has existed as long as humanity itself. What is commonly assumed by the
word religion, by writers in general, is dogma or belief.
The importance of this subject was well put forth by the great Sanscrit
scholar in the phrase, "The real history of man is the history of
religion." This conviction lends interest and weight to any investigations
into the ancient religion of Ireland; though Plowden held that "few
histories are so charged with fables as the annals of Ireland."
It was Herder who finely said, "Our earth owes the seeds of all higher
culture to a religious tradition, whether literary or oral." In proportion
as the so-called supernatural gained an ascendancy, so was man really advancing
from the materialism and brutishness of savagedom. Lecky notes "the
disposition of man in certain stages of society towards the miraculous."
But was Buckle quite correct in maintaining that "all nature conspired to
increase the authority of the imaginative faculties, and weaken the authority of
the reasoning ones"?
It is not to be forgotten in our inquiry that, as faiths rose in the East,
science has exerted its force in the West.
Fetishism can hardly be regarded as the origin of religion. As to those
writers who see in the former the deification of natural objects, Max Müller
remarks, "They might as well speak of primitive men mummifying their dead
bodies before they had wax to embalm them with."
Myth has been styled the basis of religion not less than of history; but how
was it begotten?
Butler, in English, Irish, and Scottish Churches, writes--
"To separate the fabulous from the probable, and the probable from the
true, will require no ordinary share of penetration and persevering
industry." We have certainly to remember, as one has said, that
"mythic history, mythic theology, mythic science, are alike records, not of
facts, but beliefs." Andrew Lang properly calls our attention to language,
as embodying thought, being so liable to misconception and misinterpretation.
Names, connected with myths, have been so variously read and explained by
scholars, that outsiders may well be puzzled.
How rapidly a myth grows, and is greedily accepted, because of the wish it
may be true, is exemplified in the pretty story, immortalized by music, of
Jessie of Lucknow, who, in the siege, heard her deliverers, in the remote
distance, playing "The Campbells are coming." There never was,
however, a Jessie Brown there at that time; and, as one adds, Jessie has herself
"been sent to join William Tell and the other dethroned gods and
In the Hibbert Lectures, Professor Rhys observes, "The Greek
myth, which distressed the thoughtful and pious minds, like that of Socrates,
was a survival, like the other scandalous tales about the gods, from the time
when the ancestors of the Greeks were savages." May it not rather have been
derived by Homer, through the trading Phœnicians, from the older mythologies of
India and Egypt, with altered names and scenes to suit the poet's day and clime?
It would scarcely do to say with Thierry, "In legend alone rests real
history--for legend is living tradition, and three times out of four it is truer
than what we call History." According to Froude, "Legends grew as
nursery tales grow now.--There is reason to believe that religious theogonies
and heroic tales of every nation that has left a record of itself, are but
practical accounts of the first impressions produced upon mankind by the phenomena of day and night, morning
and evening, winter and summer."
Such may be a partial explanation; but it may be also assumed that they were
placed on record by the scientific holders of esoteric wisdom, as problems or
studies for elucidation by disciples.
The anthropological works of Sir John Lubbock and Dr. Tylor can be consulted
with profit upon this subject of primitive religious thought.
Hayes O'Grady brings us back to Ireland, saying, "Who shall thoroughly
discern the truth from the fiction with which it is everywhere entwined, and in
many places altogether overlaid?--There was at one time a vast amount of zeal,
ingenuity, and research expended on the elucidation and confirming of these
fables; which, if properly applied, would have done Irish history and
archaeology good service, instead of making their very names synonymous among
strangers with fancy and delusion."
After this we can proceed with the Irish legends and myths, the introduction
to this inquiry being a direction to the current superstitions of the race. | 3.201722 |
You have to like the attitude of Thomas Henning (Max-Planck-Institut für Astronomie). The scientist is a member of a team of astronomers whose recent work on planet formation around TW Hydrae was announced this afternoon. Their work used data from ESA’s Herschel space observatory, which has the sensitivity at the needed wavelengths for scanning TW Hydrae’s protoplanetary disk, along with the capability of taking spectra for the telltale molecules they were looking for. But getting observing time on a mission like Herschel is not easy and funding committees expect results, a fact that didn’t daunt the researcher. Says Henning, “If there’s no chance your project can fail, you’re probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off.”
I would guess the relevant powers that be are happy with this team’s gamble. The situation is this: TW Hydrae is a young star of about 0.6 Solar masses some 176 light years away. The proximity is significant: This is the closest protoplanetary disk to Earth with strong gas emission lines, some two and a half times closer than the next possible subjects, and thus intensely studied for the insights it offers into planet formation. Out of the dense gas and dust here we can assume that tiny grains of ice and dust are aggregating into larger objects and one day planets.
Image: Artist’s impression of the gas and dust disk around the young star TW Hydrae. New measurements using the Herschel space telescope have shown that the mass of the disk is greater than previously thought. Credit: Axel M. Quetz (MPIA).
The challenge of TW Hydrae, though, has been that the total mass of the molecular hydrogen gas in its disk has remained unclear, leaving us without a good idea of the particulars of how this infant system might produce planets. Molecular hydrogen does not emit detectable radiation, while basing a mass estimate on carbon monoxide is hampered by the opacity of the disk. For that matter, basing a mass estimate on the thermal emissions of dust grains forces astronomers to make guesses about the opacity of the dust, so that we’re left with uncertainty — mass values have been estimated anywhere between 0.5 and 63 Jupiter masses, and that’s a lot of play.
Error bars like these have left us guessing about the properties of this disk. The new work takes a different tack. While hydrogen molecules don’t emit measurable radiation, those hydrogen molecules that contain a deuterium atom, in which the atomic nucleus contains not just a proton but an additional neutron, emit significant amounts of radiation, with an intensity that depends upon the temperature of the gas. Because the ratio of deuterium to hydrogen is relatively constant near the Sun, a detection of hydrogen deuteride can be multiplied out to produce a solid estimate of the amount of molecular hydrogen in the disk.
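As a rough illustration of that scaling step (the abundance ratio and input mass below are invented for illustration, not values from the study), the conversion from a measured hydrogen deuteride (HD) mass to a molecular hydrogen mass looks like:

```python
# Sketch: scale an HD mass estimate up to a molecular-hydrogen (H2) mass.
# The HD/H2 number ratio used here (~3e-5, i.e. twice an assumed local
# D/H ratio of 1.5e-5) is an illustrative assumption, not the paper's value.

HD_TO_H2_NUMBER_RATIO = 3.0e-5   # assumed n(HD)/n(H2)
M_HD = 3.0                        # molecular mass of HD (amu)
M_H2 = 2.0                        # molecular mass of H2 (amu)

def h2_mass_from_hd(hd_mass_jupiters):
    """Convert a measured HD gas mass into the implied H2 gas mass.

    Number of H2 molecules = N_HD / ratio, so
    H2 mass = HD mass * (M_H2 / M_HD) / ratio.
    """
    return hd_mass_jupiters * (M_H2 / M_HD) / HD_TO_H2_NUMBER_RATIO

# A tiny HD reservoir implies a huge H2 reservoir:
print(h2_mass_from_hd(0.00234))  # ~52 Jupiter masses under these assumptions
```

The point of the sketch is only that the HD detection is a multiplier: because the assumed HD fraction is so small, a barely detectable HD mass translates into tens of Jupiter masses of molecular hydrogen.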
The Herschel data allow the astronomers to set a lower limit for the disk mass at 52 Jupiter masses, the most useful part of this being that this estimate has an uncertainty ten times lower than the previous results. A disk this massive should be able to produce a planetary system larger than the Solar System, which scientists believe was produced by a much lighter disk. When Henning spoke about taking risks, he doubtless referred to the fact that this was only the second time hydrogen deuteride has been detected outside the Solar System. The pitch to the Herschel committee had to be persuasive to get them to sign off on so tricky a detection.
But 36 Herschel observations (with a total exposure time of almost seven hours) allowed the team to find the hydrogen deuteride they were looking for in the far-infrared. Water vapor in the atmosphere absorbs this kind of radiation, which is why a space-based detection is the only reasonable choice, although the team evidently considered the flying observatory SOFIA, a platform on which they were unlikely to get approval given the problematic nature of the observation. Now we have much better insight into a budding planetary system that is taking the same route our own system did over four billion years ago. What further gains this will help us achieve in testing current models of planet formation will be played out in coming years.
The paper is Bergin et al., “An Old Disk That Can Still Form a Planetary System,” Nature 493 (31 January 2013), pp. 644–646 (preprint). Be aware as well of Hogerheijde et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 334 (2011), p. 338. The latter, many of whose co-authors also worked on the Bergin paper, used Herschel data to detect cold water vapor in the TW Hydrae disk, with this result:
Our Herschel detection of cold water vapor in the outer disk of TW Hya demonstrates the presence of a considerable reservoir of water ice in this protoplanetary disk, sufficient to form several thousand Earth oceans worth of icy bodies. Our observations only directly trace the tip of the iceberg of 0.005 Earth oceans in the form of water vapor.
Clearly, TW Hydrae has much to teach us.
Addendum: This JPL news release notes that although a young star, TW Hydrae had been thought to be past the stage of making giant planets:
“We didn’t expect to see so much gas around this star,” said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. “Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters,” Bergin said. | 3.503813 |
1. True / false: A violin string can only produce a single frequency unless its tuning is changed.
2. True / false: Most of the sound from a violin comes directly from the strings.
3. As a violin string vibrates, its motional energy is changing rapidly with time. What two types of energy are involved in the motion of an oscillating string such as a violin string (ignore friction and changes in gravitational potential energy)?
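For question 3, the two forms that trade back and forth are kinetic energy and elastic potential energy. A minimal numerical sketch (treating one point on the string as a simple harmonic oscillator, with made-up parameter values) shows each form oscillating while their sum stays constant:

```python
import math

# Model a point on the string as a mass on a spring (simple harmonic motion).
# m, k, and A are arbitrary illustrative values, not real violin parameters.
m, k, A = 0.001, 400.0, 0.002             # mass (kg), stiffness (N/m), amplitude (m)
omega = math.sqrt(k / m)                  # angular frequency (rad/s)

def energies(t):
    x = A * math.cos(omega * t)           # displacement
    v = -A * omega * math.sin(omega * t)  # velocity
    ke = 0.5 * m * v * v                  # kinetic energy
    pe = 0.5 * k * x * x                  # elastic potential energy
    return ke, pe

total0 = sum(energies(0.0))
for t in (0.0, 0.001, 0.002, 0.003):
    ke, pe = energies(t)
    # KE and PE each change with time, but their sum is conserved:
    assert abs((ke + pe) - total0) < 1e-12
```

At the extremes of the swing all the energy is elastic potential; as the string element passes through its rest position all of it is kinetic.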
Consider four vectors F1, F2, F3, and F4, where their magnitudes are F1 = 43 N, F2 = 36 N, F3 = 19 N, and F4 = 54 N. Let θ1 = 120°, θ2 = −130°, θ3 = 200°, and θ4 = −67°, measured from the positive x axis with the counter-clockwise angular direction as positive.

What is the magnitude of the resultant vector F, where F = F1 + F2 + F3 + F4? Answer in units of N. What is the direction of this resultant vector F?

Note: Give the angle in degrees, use counterclockwise as the positive angular direction, between the limits from the positive x axis. Answer in units of °.

I worked out the first part of the question by using trigonometric rules. My x value = −5.68671 and my y value = −33.5474. The magnitude came out to 34.026 N. I tried finding the direction by using θ = tan⁻¹(y/x) but I can't get the right answer.
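The likely problem is the quadrant: with both components negative, the resultant lies in the third quadrant, but plain tan⁻¹(y/x) folds the answer into the first quadrant. A two-argument arctangent fixes this; the sketch below takes the posted component values at face value:

```python
import math

# Components as computed by the poster (taken at face value).
x, y = -5.68671, -33.5474

magnitude = math.hypot(x, y)                  # sqrt(x^2 + y^2)
theta_naive = math.degrees(math.atan(y / x))  # loses the quadrant information
theta = math.degrees(math.atan2(y, x))        # quadrant-aware

print(round(magnitude, 3))    # ~34.026 N, matching the poster's value
print(round(theta_naive, 1))  # ~80.4 degrees -- wrong quadrant
print(round(theta, 1))        # ~-99.6 degrees, i.e. third quadrant
```

The same fix works by hand: since tan⁻¹(y/x) returns about +80.4°, subtract 180° to land in the third quadrant, giving roughly −99.6° from the positive x axis.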
A common virus that affects 50-70% of adults. If a woman acquires cytomegalovirus (CMV) infection during pregnancy, there is about a 15% chance that her infant will have infection and serious complications. Women who have had CMV infection and who are considering breastfeeding their prematurely born infant should check first with their child's doctor since there is a risk of transmitting the virus to the infant through breast milk. Prematurely born infants may not be able to fight off the CMV infection as do infants born at term. CMV infection is usually a mild infection in adults. Infants born to women who have had CMV long before they became pregnant are at low risk of having an infant with serious CMV infection. | 3.004883 |
The cardinals were deadlocked. They had been deadlocked for 27 months, since 1292 when Pope Nicholas IV died. There were only twelve cardinals and they were evenly divided between two factions of the Roman nobility. Neither side would give way. Each hoped for the perks that would accrue from having one of their number named pope.
And then a message arrived from the mountains. Peter Murrone, the hermit founder of the Celestines, a strict branch of Benedictines, warned that God was angry with the cardinals. If they did not elect a pope within four months, the Lord would severely chastise the church.
Eager for a way out of their deadlock, the cardinals asked themselves, why not elect Peter himself? Finally the cardinals could agree. In a vote that they declared to be "miraculous" they unanimously chose Peter.
When three of the cardinals climbed to his mountain roost to tell Peter he had been chosen, the hermit wasn't happy. All of his life, he had tried to run away from people. Dressed like John the Baptist, he subjected himself to fasts, heavy chains, and nights of prayer without sleep. But when the cardinals and his friend King Charles II of Naples insisted that he must accept the position for the good of the church, Peter reluctantly agreed.
Charles II prompted him to name a number of new cardinals--all of them from France and Naples, changing the composition of the group which would elect future popes. Peter, who was too trusting, made many mistakes. A babe in political matters, he was used by everyone around him. The Vatican staff even sold blank bulls with his signature on them.
The business of the church slowed to a crawl because he took too much time making decisions. Within weeks it became apparent he had to resign for the good of the church. But could a pope resign? Guided by one of the cardinals, Benedetto Caetani, Celestine as pope issued a constitution which gave himself the authority to resign.
All sorts of rumors followed this resignation. Peter had built himself a hut in the Vatican where he could live like a hermit. Supposedly Caetani thrust a reed through the wall of the hut and pretended he was the voice of God ordering Celestine to resign. Since his mind was undecided as to his proper course, this trick is said to have convinced him.
Celestine stepped down on this day, December 13, 1294, having actually filled the position of pope only three months. He was replaced by Caetani who took the name Boniface VIII. Afraid that Peter would become a rallying point for troublemakers, Boniface locked the old man up. He destroyed most of the records of Celestine's short time in office, but he could not unmake the cardinals.
Peter escaped and wandered through mountains and forests. He was recognized and recaptured when he tried to sail to Greece, his boat having been driven back by a storm. He spent the last nine months of his life in prayer as a prisoner of Boniface, badly treated by his guards. When he died in 1296, rumor had it that Boniface had murdered him. He was about 81 years old. In 1313, Pope Clement V declared him a saint.
Interviewing Children About Past Events: Evaluating the NICHD Interview Protocol
This study, conducted by the NICHD in collaboration with Lancaster University in Lancaster, England, will evaluate the accuracy of information obtained from children using an adapted version of NICHD's interview protocol. The NICHD protocol was developed to help forensic interviewers obtain information about their experiences from children who may be victims of or witnesses to a crime. This study does not involve forensic interviews, but is designed to obtain information from children about an event that takes place at their school. The study will examine how children report a brief interaction with an unfamiliar adult, how the memory of the event changes over time, and how the use of different interview techniques can help children give fuller and more accurate accounts of past experiences.
Children 5 and 6 years of age who attend local schools in the Lancaster, England, area may be eligible for this study. Participants will be told that they are going to have their pictures taken and will be escorted by a researcher to a room at the school with another researcher who is posing as a photographer. The "photographer" and the child will put on a costume, such as a pirate's outfit, over their street clothes, helping each other put on pieces of the costume. The photographer will take pictures of the child in the costume. They will each take off the costumes and the child will be told that he or she will receive the photographs at a later time. Another researcher posing as a photographer will come into the room, interrupting the event, and begin to argue with the first photographer about who had booked the equipment. They will resolve the argument and apologize to the child for the interruption.
About 6 weeks after the event, the children will be interviewed using the adapted version of the NICHD interview protocol. Half will be interviewed first about the staged event (the photo session), followed by an interview about a fictitious event (e.g., a class visit to the fire station) that could plausibly have happened but did not. The other half of the children will be interviewed first about the fictitious event and then about the staged event. The children will be interviewed according to one of the following three procedures:
- The NICHD protocol preceded by a rapport-building phase that includes the rules of the interview and open-ended questions about the child and a recently experienced event
- The NICHD protocol preceded by a rapport-building phase that includes the rules of the interview and direct questions about the child and a recently experienced event, or
- The NICHD protocol preceded by the rules of the interview and open-ended questions about the child, but no opportunity to practice talking about a recently experienced event.
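The counterbalancing described above (three rapport-building scripts crossed with two interview orders) can be sketched as a simple round-robin assignment. The condition labels below are paraphrases of the study's descriptions, not official study terminology:

```python
import itertools
from collections import Counter

# Hypothetical sketch of counterbalanced assignment: three rapport-building
# scripts crossed with two interview orders (staged event first vs.
# fictitious event first).

CONDITIONS = ["open-ended + practice", "direct + practice", "open-ended, no practice"]
ORDERS = ["staged event first", "fictitious event first"]

def assign(children):
    """Cycle children through all six condition-order cells in turn."""
    cells = itertools.cycle(itertools.product(CONDITIONS, ORDERS))
    return {child: cell for child, cell in zip(children, cells)}

groups = assign([f"child-{i}" for i in range(12)])
# With 12 children, each of the 6 cells receives exactly 2 children:
assert all(n == 2 for n in Counter(groups.values()).values())
```

Round-robin assignment keeps the cells balanced at any sample size that is a multiple of six, which is why counterbalanced designs typically recruit in such multiples.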
After the interviewer has elicited as much information as is likely to be gained from verbal questions, he or she will present the child with a line drawing of a gender neutral person and ask the child to indicate where the child was touched by the photographer and where the child touched the photographer. Any child who provides a report of the fictitious event will be interviewed in the same way about the fictitious event. After 1 year, the children will be interviewed again in the same manner as the 6-week interview.
The interviews will be audio- and videotaped to record the kind of information the children talk about and compare it to what actually happened in the event.
|Official Title:||Evaluating the NICHD Interview Protocol in an Analog Study|
|Study Start Date:||January 2004|
|Estimated Study Completion Date:||October 2006|
The NICHD interview protocol was designed to aid forensic interviewers in adhering to best standards of practice when interviewing children. Field studies evaluating its use have demonstrated improvements in both interviewer behavior, and the amount and quality of information obtained from children, compared to interviews conducted prior to its implementation in test sites. Because field studies were conducted in forensic settings, however, it has not been possible to evaluate the protocol's effect on the accuracy of information reported by children. This present study therefore aims to evaluate the accuracy of information obtained using the NICHD interview protocol in an analog study. In addition the study is designed to explore children's willingness to provide details of a suggested, non-experienced event, and the effectiveness of including a human figure drawing as an auxiliary technique for eliciting further information. Furthermore, we will explore the importance of the pre-substantive/rapport-building phase of interviews, and the impact this has on children's reports of experienced and suggested events. Finally, we will explore the effectiveness of the interview protocol with children when a long delay has occurred between the event and the interview.
Children will take part, individually, in a staged event at their school, and approximately six weeks later, be interviewed at the university about what they experienced. In addition, children will be asked to talk about a suggested fictitious event (one that has not happened). The order of the interviews will be counter-balanced across children and rapport-building conditions. Some children will be interviewed with an open-ended script that includes practice in episodic memory, some with a script made up of direct questions, including a practice in episodic memory, and some with one that uses open-ended questions but does not provide practice in talking about an event from episodic memory. Approximately one year later children will be interviewed again, so that we can examine children's reports in protocol interviews over a long delay. Children's reports will be analyzed for both overall amount and accuracy of information reported, as well as in response to the different cues and props given in the course of the interview. It is not anticipated that the study will pose any risks to the children involved, and we expect that both the staged event and the interviews will be enjoyable and stimulating. We expect that the results of the study will provide further support for the use of NICHD interview as a safe and effective means of interviewing children about past experiences. In addition to general information on children's eyewitness capabilities, the study is expected to supplement field studies by contributing knowledge about the accuracy of children's memory using the NICHD interview protocol.
|United States, Maryland|
|National Institute of Child Health and Human Development (NICHD)|
|Bethesda, Maryland, United States, 20892| | 3.017108 |
Growth Hormone and Endothelial Function in Children
Objective: This study is designed to determine whether growth hormone treatment in children 8 to 18 years of age alters function of the lining of the arteries. This may play a role in increasing or decreasing the risk of heart disease.
Methods. Twenty children, for whom growth hormone therapy will be otherwise provided, will be studied before and 3 months after starting growth hormone. Subjects can be on other hormonal replacements but no other medications.
Each study will be done in the fasting state. The blood vessel function will be determined by measuring the change in forearm blood flow before and after blocking flow to the arm for 5 minutes. Blood will be drawn after the test to measure glucose, insulin and fats.
Growth Hormone Deficiency
Drug: growth hormone
|Study Design:||Allocation: Non-Randomized; Endpoint Classification: Safety Study; Intervention Model: Single Group Assignment; Masking: Open Label; Primary Purpose: Treatment|
|Official Title:||Growth Hormone and Endothelial Function in Children|
- Change in Reactive Hyperemic response after 3 months of growth hormone [ Time Frame: 3 months ] [ Designated as safety issue: No ]
- Glucose, Insulin, lipid measurements [ Time Frame: 3 months ] [ Designated as safety issue: No ]
|Study Start Date:||January 2005|
|Study Completion Date:||December 2007|
|Primary Completion Date:||June 2007 (Final data collection date for primary outcome measure)|
The purpose of the research is to learn more about how the lining of arteries in the body (called the endothelium) is affected by growth hormone treatment in children and adolescents. Poor function by the blood vessels is associated with increased risk of heart disease or stroke. This research is being done because growth hormone treatment has been shown to make the endothelium work better in adults. Growth hormone treatment may have the same or different effects in children because the dose is larger in children.
Children between 8 and 18 years who are to be started on growth hormone will be eligible to participate. Blood vessel function will be studied before starting growth hormone and 3 months after. This will be done by measuring blood flow to the arm before and after 5 min of stopping blood flow to the arm. The three months of growth hormone will be given free.
|United States, Ohio|
|Ohio State University|
|Columbus, Ohio, United States, 43210|
|Study Chair:||Robert P Hoffman, MD||Ohio State University| | 3.088607 |
Supported by the National Science Foundation
Available Languages: English, Spanish
This classroom-tested learning module gives a condensed, easily-understood view of the development of atomic theory from the late 19th through early 20th century. The key idea was the discovery that the atom is not an "indivisible" particle, but consists of smaller constituents: the proton, neutron, and electron. It discusses the contributions of John Dalton, J.J. Thomson, Ernest Rutherford, and James Chadwick, whose experiments revolutionized the world view of atomic structure. See Related Materials for a link to Part 2 of this series.
atomic structure, cathode ray experiment, electron, helium atom, history of atom, history of the atom, hydrogen atom, neutron, proton
Metadata instance created July 12, 2011 by Caroline Hall
Record updated October 10, 2012 by Caroline Hall
Last Update when Cataloged: January 1, 2006
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
9-12: 4D/H1. Atoms are made of a positively charged nucleus surrounded by negatively charged electrons. The nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. The nucleus is composed of protons and neutrons which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
10. Historical Perspectives
10F. Understanding Fire
9-12: 10F/H1. In the late 1700s and early 1800s, the idea of atoms reemerged in response to questions about the structure of matter, the nature of fire, and the basis of chemical phenomena.
9-12: 10F/H3. In the early 1800s, British chemist and physicist John Dalton united the concepts of atoms and elements. He proposed two ideas that laid the groundwork for modern chemistry: first, that elements are formed from small, indivisible particles called atoms, which are identical for a given element but different from any other element; and second, that chemical compounds are formed from atoms by combining a definite number of each type of atom to form one molecule of the compound.
9-12: 10F/H4. Dalton figured out how the relative weights of the atoms could be determined experimentally. His idea that every substance had a unique atomic composition provided an explanation for why substances were made up of elements in specific proportions.
This resource is part of a Physics Front Topical Unit.
Topic: Particles and Interactions and the Standard Model Unit Title: History and Discovery
An electron is a subatomic particle of spin 1/2. It couples with photons and is therefore electrically charged. It is a lepton with a rest mass of 9.109 × 10⁻³¹ kg and an electric charge of −1.602 × 10⁻¹⁹ C, which is the smallest known charge possible for an isolated particle (confined quarks have fractional charge). The electric charge of the electron, e, is used as a unit of charge in much of physics.
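To get a feel for the size of this unit, one can count how many electrons carry one coulomb of charge, using the values quoted above:

```python
# How many electrons make up one coulomb of charge?
ELEMENTARY_CHARGE = 1.602e-19   # magnitude of the electron charge, in C
ELECTRON_MASS = 9.109e-31       # electron rest mass, in kg

electrons_per_coulomb = 1.0 / ELEMENTARY_CHARGE
mass_of_that_many = electrons_per_coulomb * ELECTRON_MASS

print(f"{electrons_per_coulomb:.3e}")  # ~6.242e+18 electrons
print(f"{mass_of_that_many:.3e} kg")   # yet they weigh only ~5.7e-12 kg
```

Roughly six quintillion electrons per coulomb, with a combined mass of only a few picograms: both the charge and the mass of the electron are extraordinarily small on everyday scales.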
Electron pairs within an orbital system have opposite spins due to the Pauli exclusion principle; this characteristic spin pairing allows two electrons to exist in the same quantum orbital, as the opposing magnetic dipole moments induced by each of the electrons ensure that they are attracted together.
Current theories consider the electron as a point particle, as no evidence for internal structure has been observed.
As a theoretical construct, electrons have been able to explain other observed phenomena, such as the shell-like structure of an atom, energy distribution around an atom, and energy beams (electron and positron beams).
- Massimi, M. (2005). Pauli's Exclusion Principle: The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8.
- Mauritsson, J. "Electron filmed for the first time ever". Lunds Universitet. Retrieved 2008-09-17. http://www.atomic.physics.lu.se/research/attosecond_physics
- Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 981-02-3500-3.
Use it or lose it? Researchers investigate the dispensability of our DNA
October 2, 2008. Our genome contains many genes encoding proteins that are similar to those of other organisms, suggesting evolutionary relationships; however, protein-coding genes account for only a small fraction of the genome, and there are many other DNA sequences that are conserved across species. What are these sequences doing, and do we really need them at all? In a study published online today in Genome Research (www.genome.org), researchers have delved into this mystery and found that evolution has actively kept them in our genome.
Before the human genome was sequenced, researchers estimated the genome might contain upwards of 140,000 protein-coding genes, but surprisingly, sequencing revealed only about 20,000, accounting for less than 2% of the entire genome. Previously, Dr. Gill Bejerano of Stanford University found that lurking within the other 98% of the genome are stretches of sequences, known as ultraconserved elements, which are identical between humans and animals such as rodents and chickens, even though hundreds of millions of years of independent evolution separates them.
Other evidence has suggested that ultraconserved sequences can harbor critical functions, such as regulation of the activity of certain genes. Yet research in this field has produced laboratory results that are seemingly in disagreement: some ultraconserved elements can be deleted from the mouse genome and produce no observable effect on mice. Bejerano cautions that laboratory experiments such as these may not be able to detect slow evolutionary forces at work. "With this in mind, we set out to examine the genomic data, much as someone would examine archaeological data, in search of similar deletion events that have happened naturally, and more importantly, were retained in the wild."
"An analogy I like to entertain is that of plate tectonics: a fraction of the phenomena may be strong enough to be directly measured by our instruments, but to appreciate its full magnitude we must dig into the geological record," said Bejerano. "This digging into the genomic record is what our current work was all about."
Bejerano and graduate student Cory McLean studied the genomes of six mammals, investigating ultraconserved elements that are shared between primates and closely related mammals, were present in the ancestor of modern rodents, but have been lost in the rodent lineage more recently. The researchers found that the genomic evidence supports an important biological role for ultraconserved elements, as well as thousands of other non-coding elements that are resistant to deletion. "The functional importance of ultraconserved elements is reinforced by the observation that the elements are rarely lost in any species," said McLean. "In fact, they are over 300-fold less likely to be lost than genomic loci which evolve neutrally in our genome."
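The "300-fold" figure is a ratio of loss frequencies between element classes. With hypothetical counts standing in for the study's real data, the calculation looks like:

```python
# Sketch of a fold-depletion calculation with made-up counts.
# ("ultra" = ultraconserved elements, "neutral" = neutrally evolving loci.)

def loss_rate(lost, total):
    """Fraction of elements in a class that were deleted in a lineage."""
    return lost / total

def fold_depletion(ultra_lost, ultra_total, neutral_lost, neutral_total):
    """How many times less often ultraconserved elements are lost."""
    return loss_rate(neutral_lost, neutral_total) / loss_rate(ultra_lost, ultra_total)

# Hypothetical numbers purely for illustration:
print(fold_depletion(ultra_lost=2, ultra_total=6000,
                     neutral_lost=1000, neutral_total=10000))  # ~300-fold
```

A real analysis would add confidence intervals around these rates, since the number of observed losses in the conserved class is tiny, but the core comparison is just this ratio.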
Bejerano explained that while loss of some elements may have a significant impact on the fitness of a species and the loss of other elements might be harder to detect in the laboratory, nearly all changes to these regions are picked up by evolution and swept out of the population.
"Perhaps our most striking observation is one of sheer magnitude," Bejerano said. "Our work highlights how essential these dozens of thousands of regions are to the natural evolution of a species even as their actual functions remain, at large, a mystery."
Scientists from Stanford University (Stanford, CA) contributed to this study.
This work was supported by a Stanford Bio-X Graduate Fellowship and the Edward Mallinckrodt, Jr. Foundation.
Gill Bejerano, Ph.D. ([email protected]; +1-650-723-7666) has agreed to be contacted for more information.
Interested reporters may obtain copies of the manuscript from Peggy Calicchia, Editorial Secretary, Genome Research ([email protected]; +1-516-422-4012).
About the article:
The manuscript will be published online ahead of print on October 2, 2008. Its full citation is as follows:
McLean, C., and Bejerano, G. Dispensability of mammalian DNA. Genome Res. doi:10.1101/gr.080184.108.
About Genome Research:
Genome Research (www.genome.org) is an international, continuously published, peer-reviewed journal published by Cold Spring Harbor Laboratory Press. Launched in 1995, it is one of the five most highly cited primary research journals in genetics and genomics.
About Cold Spring Harbor Laboratory Press:
Cold Spring Harbor Laboratory Press is an internationally renowned publisher of books, journals, and electronic media, located on Long Island, New York. It is a division of Cold Spring Harbor Laboratory, an innovator in life science research and the education of scientists, students, and the public. For more information, visit www.cshlpress.com.
Genome Research issues press releases to highlight significant research studies that are published in the journal. | 3.276768 |
Federal Government Seceded From the States and the Constitution First in 1913
Submitted by realman2020 on Mon, 11/19/2012 - 01:01
In 1861, the federal government seceded from the states and our Constitution first. The Southern states broke away from the Union because the federal government broke the compact, or contract: the federal government overstepped its boundaries in the Constitution. The Southern states' secession had nothing to do with slavery. It had everything to do with states' rights.

In 1913, the 16th and 17th Amendments were announced as ratified without three-fourths of the states. The Federal Reserve Act passed on Christmas Eve in 1913, in the dark of night when Congress was in recess. A handful of congressmen and senators passed this backdoor legislation by a voice vote, and President Woodrow Wilson signed the bill into law. The federal government seceded from the Constitution for the bankers.
To read more click link below | 3.594703 |
Heat is a sad fact of life for current generation electronics. Any Android, iPhone, or BlackBerry user can tell you that smartphones tend to get pretty hot at times. And by today's standards a balmy 85 degrees Celsius, while hot enough to cook an egg, is a pretty "good" operating temperature for a high-powered PC graphics processing unit.

But that could all soon change, according to the results of a new study by researchers at the University of Illinois. Examining graphene transistors, a team led by mechanical science and engineering professor William King and electrical and computer engineering professor Eric Pop made a remarkable discovery -- graphene appears to self-cool.
I. What is Graphene?
Graphene is somewhat like a miniature
"fence" of carbon. The material consists of a single-atom thick
layer composed of hexagonal units. At each point of the hexagon sits a
carbon atom that is bonded to its three close neighbors.
The material behaves like a semiconductor, despite
being made of organic atoms. It offers remarkable performance at an incredibly
small scale, thus the electronics industry views it as a potential material
to power electronic devices of the future.
A variety of methods exist for producing graphene.
The earliest method was an exfoliation technique that involved stripping
individual graphene layers off a layer of graphite (the material found in
pencil lead) -- this technique (as of 2008) cost as much as $100M USD to
produce a single cubic centimeter of material. However, rapid advances in
production have allowed manufacturers to begin scaling up production to the
point where tons of exfoliated graphene can now be produced.
Other production techniques promise to drop the price even further. One
method, epitaxial growth on silicon, cost $100 per cubic centimeter in 2009.
Its limitation is that, obviously, it requires silicon (eliminating some
desirable properties like flexibility). South Korean researchers have
tested another promising method, nickel metal transfer.
Graphene is fascinating from a physics
perspective. In 2005 physicists at the University of Manchester and the
Philip Kim group from Columbia University demonstrated that quasiparticles
inside graphene were massless Dirac fermions. These unusual particles help
give rise to the material's unique characteristics.
II. Graphene as a Self-Cooling Device
Despite the extreme interest in the material, a
great deal of mystery still surrounds graphene. Because it is so
extremely thin, certain properties of the material are
difficult to test and measure accurately.
Overcoming technical challenges, the University of
Illinois team used an atomic force microscope tip as a temperature probe to
make the first nanometer-scale temperature measurements of a working graphene
transistor.
What they found was that the resistive heating
("waste heat") effect in graphene was weaker than its thermo-electric
cooling effect at times. This is certainly not the case in silicon or
other semiconductors where resistive heating far surpasses cooling effects.
What this means is that graphene circuits may not
get hot like traditional silicon-based ones. This could open the door to
dense 3D chips and more.
Further, as the heat is converted back into
electricity by the device, graphene transistors may have a two-fold power
efficiency gain, both in ditching energetically expensive fans and by recycling
heat losses into usable electricity.
Professor King describes, "In silicon and
most materials, the electronic heating is much larger than the self-cooling.
However, we found that in these graphene transistors, there are regions where
the thermoelectric cooling can be larger than the resistive heating, which
allows these devices to cool themselves. This self-cooling has not previously
been seen for graphene devices."
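The competition King describes comes down to how the two effects scale with current: resistive (Joule) heating grows as I²R, while thermoelectric (Peltier) heat flow grows only linearly, as Π·I. The toy calculation below illustrates why the linear term can win at small currents; all component values are invented for illustration and are not measurements from the study.

```python
# Toy comparison of the two competing effects at a transistor contact.
# All numbers are invented for illustration; they are not values from
# the University of Illinois measurements.

def joule_heating_w(current_a: float, resistance_ohm: float) -> float:
    """Resistive heating grows with the square of the current: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

def peltier_cooling_w(current_a: float, peltier_coeff_v: float) -> float:
    """Peltier heat flow grows only linearly with current: P = Pi * I."""
    return current_a * peltier_coeff_v

I_SMALL = 1e-4      # 100 microamps (illustrative)
R_CONTACT = 100.0   # contact resistance, ohms (illustrative)
PI_COEFF = 0.05     # Peltier coefficient, volts (illustrative)

# At small currents the linear cooling term can exceed the quadratic heating:
print(peltier_cooling_w(I_SMALL, PI_COEFF) > joule_heating_w(I_SMALL, R_CONTACT))
```

Because the quadratic term shrinks faster than the linear one as current drops, there is some operating regime in which Peltier cooling at a contact outweighs Joule heating; the team's contribution was measuring this crossover directly in working graphene devices.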
Professor Pop adds, "Graphene electronics are
still in their infancy; however, our measurements and simulations project that
thermoelectric effects will become enhanced as graphene transistor technology
and contacts improve."
A paper has been published in the prestigious peer-reviewed journal Nature
Nanotechnology. University of Illinois graduate student Kyle Grosse,
undergraduate Feifei Lian and postdoctoral researcher Myung-Ho
Bae are listed as co-authors on the paper.
III. What's Next?
The study should provide even more motivation for
semiconductor manufacturing companies like Intel, GlobalFoundries, and TSMC to
lay down the process work necessary to mass-produce circuits based on graphene
transistors, capacitors, etc.
As for the University of Illinois team, they plan
to next use their new measurement technique to analyze carbon nanotubes and other
novel structures that are of interest to future electronics applications.
Their work is funded via a grant from the Air
Force Office of Scientific Research and the Office of Naval Research. | 3.812167 |
Corn crop residues are often left on harvested fields to protect soil quality, but they could become an important raw material in cellulosic ethanol production. U.S. Department of Agriculture (USDA) research indicates that soil quality would not decline if post-harvest corn cob residues were removed from fields.
This work, led by Agricultural Research Service (ARS) soil scientist Brian Wienhold, supports the USDA priority of developing new sources of bioenergy. ARS is USDA's chief intramural scientific research agency.
Wienhold, with the ARS Agroecosystem Management Research Unit in Lincoln, Neb., led studies that compared runoff rates and sediment loss from no-till corn fields where postharvest crop residues were either removed or retained. The scientists also removed cobs from half of the test plots that were protected by the residues.
After the test plots were established, the scientists generated two simulated rainfall events. The first occurred when the fields were dry, and the next occurred 24 hours later when the soils were almost completely saturated.
During the first event, on plots where residue was removed, runoff began around 200 seconds after the "rain" began. Runoff from plots protected by residues didn't start until around 240 seconds after it started to "rain."
Runoff from the residue-free plots contained 30 percent more sediment than runoff from all the residue-protected plots. But the presence or absence of cobs on the residue-protected plots did not significantly affect sediment loss rates.
Wienhold's team concluded that even though cob residues did slightly delay the onset of runoff, sediment loss rates were not significantly affected by the presence or absence of the cobs. The results indicated that the cobs could be removed from other residue and used for bioenergy feedstock without significantly interfering with the role of crop residues in protecting soils.
In a related study, Wienhold examined how the removal of cob residues affected soil nutrient levels. Over the course of a year, his sampling indicated that cobs were a source of soil potassium, but that they weren't a significant source of any other plant nutrients. | 3.551723 |
A form of carbohydrate that will raise blood glucose levels relatively quickly when ingested. The term “fast-acting carbohydrate” is generally used in discussions of treating hypoglycemia, or low blood sugar. However, as research accumulates on the subject of carbohydrates and how quickly they are absorbed, some diabetes experts say the term has become outdated.
What defines hypoglycemia varies from source to source, but it generally refers to a blood glucose level below 70 mg/dl. In many cases, this will produce the typical symptoms of low blood sugar, which include trembling, sweating, heart palpitations, butterflies in the stomach, irritability, hunger, or fatigue. Severe hypoglycemia can cause drowsiness, poor concentration, confusion, and even unconsciousness. Diabetes care experts generally recommend checking one’s blood sugar level whenever possible to confirm hypoglycemia before treating it.
To treat hypoglycemia, the standard advice is to consume 10-15 grams of “fast-acting” carbohydrate. Each of the following items provides roughly 10-15 grams of carbohydrate:
- 5-6 LifeSaver candies
- 4-6 ounces regular (non-diet) soda
- 4-6 ounces of orange juice
- 2 tablespoons of raisins
- 8 ounces of nonfat or low-fat milk
- One tube (0.68 ounces) of Cake Mate decorator gel.
There are also a number of commercially available glucose tablets and gels. Benefits to using commercial products include the following:
- They aren’t as tempting to snack on as candy is.
- They contain no fat, which can slow down digestion, or fructose, which has a smaller and slower effect on blood glucose.
- The commercial products are standardized, so it’s easy to measure out a dose of 10-15 grams of carbohydrate.
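The advice above is often summarized as a "treat and recheck" routine: take roughly 10-15 grams of fast-acting carbohydrate, wait about 15 minutes, and test again. A minimal sketch of that decision logic follows; the threshold and function names are illustrative only, and none of this is medical advice.

```python
# Hypothetical sketch of the standard low-blood-sugar treatment routine
# ("consume 10-15 g of fast-acting carbohydrate, recheck after ~15 min").
# Thresholds and names are illustrative, not medical advice.

HYPO_THRESHOLD_MG_DL = 70   # common working definition of hypoglycemia
CARB_DOSE_GRAMS = 15        # upper end of the 10-15 g range

def recommend_treatment(glucose_mg_dl: float) -> str:
    """Return a suggested next step for a *conscious* person."""
    if glucose_mg_dl >= HYPO_THRESHOLD_MG_DL:
        return "no treatment needed"
    return f"take {CARB_DOSE_GRAMS} g fast-acting carbs, recheck in 15 min"

print(recommend_treatment(62))   # below 70 mg/dl, so treat
print(recommend_treatment(95))   # in range, so no treatment
```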
If someone is unconscious from low blood sugar, don’t attempt to give him anything to eat or drink. Rather, take him to the nearest emergency room, or inject glucagon if you have been instructed how to do it. If you can’t get emergency help fast enough and can’t inject glucagon, it may help to rub a little glucose gel between the person’s gums and cheek. | 3.024362 |
Micro vs Macro
Micro and macro are prefixes that are used before words to make them small or big respectively. This is true with micro and macroeconomics, micro and macro evolution, microorganism, micro lens and macro lens, micro finance and macro finance, and so on. The list of words that makes use of these prefixes is long and exhaustive. Many people confuse between micro and macro despite knowing that these prefixes signify small and large respectively. This article takes a closer look at the two prefixes to find out their differences.
To understand the difference between micro and macro, let us take up the example of micro and macro evolution. Evolution that takes place within a single species is called microevolution, whereas evolution that transcends the boundaries of species and takes place on a very large scale is termed macroevolution. Though the principles of evolution, such as genetics, mutation, natural selection, and migration, remain the same across microevolution and macroevolution, the distinction between the two is a great way to explain this natural phenomenon.
Another field of study that makes use of micro and macro is economics. While the study of the overall economy and how it works is called macroeconomics, microeconomics focuses on the individual person, company, or industry. Thus, the study of GDP, employment, inflation, and so on in an economy is classified under macroeconomics. Microeconomics is the study of the forces of demand and supply within a particular industry affecting its goods and services. So it is macroeconomics when economists choose to concentrate upon the state of the economy in a nation, whereas the study of a single market or industry remains within the realm of microeconomics.
There is also the study of finance, where these two prefixes are commonly used. In microfinance the focus is upon the monetary needs and requirements of a single individual, whereas in macro finance the financing by banks or other financial institutions is of a very large nature.
Micro and macro are derived from Greek, where micro means small and macro means large. These prefixes are used in many fields of study, such as finance, economics, and evolution, where we have words like micro finance and macro finance, microevolution and macroevolution. Studying something at a small level is micro analysis, while studying it on a large scale is macro analysis. Financing the needs of an individual may be microfinancing, whereas the financial needs of a builder requiring money for a very large infrastructural project may be referred to as macro finance.
Presenting - 'Amasia', The Next Supercontinent!
Ever since Earth has been in existence there have been the formation and breaking apart of many supercontinents - While Pangaea, which existed between 150-300 million years ago, is the most well-known, prior to that were Nuna (1.8 billion years ago), Rodinia (1 billion years ago) and many more that cannot be verified, because 2 billion year-old rocks containing evidence of magnetic fields are hard to find.
And while most scientists are in agreement that Rodinia, Nuna and Pangaea did exist, there is very little consensus on the continents they comprised - Some experts believe that they were the same ones, while others think that the wandering landmasses reassembled on the opposite side each time - about 180° away from where the previous supercontinent had come together.
Now, a group of geologists led by Yale University graduate student Ross Mitchell have a new theory - They think that each supercontinent came together about 90° from its predecessor. That is, the geographic center of Rodinia was about 88° away from the center of Nuna, whilst the center of Pangaea, believed to have been located near modern-day Africa, was about 88° away from the center of its supergiant predecessor, Rodinia.
These calculations that were reported earlier this year were based not only on the paleolatitude (The latitude of a place at some time in the past, measured relative to the earth's magnetic poles in the same period) of the ancient supercontinents, but also, for the first time the paleolongitude, that Ross measured by estimating how the locations of the Earth's magnetic poles have changed through time.
While the theory is interesting, what is even more so is that the team has also come up with a model of the next supercontinent. If their estimates are accurate, over the next few hundred million years, the tectonic plates under the Americas and Asia will both drift northward and merge. This means that modern day North and South America will come together and become one giant landmass, displacing the Caribbean Sea completely. A similar movement in Eurasia (Australia and South Eastern Asia) will cause the Arctic Ocean to disappear causing the continents to fuse with Canada. The result? A ginormous continent that they call 'Amasia'. The one thing that is not too clear is if Antarctica will be part of this or just be left stranded.
While many researchers believe that the Yale team's theory is quite feasible, nobody will ever know for sure - because unfortunately, none of us is going to be around a few hundred million years from now. But it's sure fun to envision the new world, isn't it?
Between 35,000 and 45,000 years ago, Neanderthals in Europe and Asia were replaced by the first modern humans. Why and how this transition occurred remains somewhat controversial. New research from the journal Science suggests that sheer numbers may have played a large role in modern humans’ eventual takeover; archeological data shows that early populations of modern humans may have outnumbered Neanderthals by more than 9 to 1.
http://www.wired.com/wiredscience/2011/ ... -dynamics/
Humans, Neanderthals got it on
By Lily Boisson, CBC News
New genome shows Neanderthal trace in humans
A new study adds more evidence to the theory that humans and Neanderthals interbred thousands of years ago. The study found that many humans outside of Africa share DNA with the long-extinct species.
An international team of researchers has found that a small part of the human X chromosome, which originates from Neanderthals, is present in about nine per cent of individuals from outside of Africa.
http://www.cbc.ca/news/technology/story ... eding.html | 3.369405 |
Researchers at New Jersey Institute of Technology (NJIT) have developed an inexpensive solar cell that can be painted or printed on flexible plastic sheets.
“Someday, homeowners will even be able to print sheets of these solar cells with inexpensive home-based inkjet printers. Consumers can then slap the finished product on a wall, roof or billboard to create their own power stations,” said Somenath Mitra, Ph.D., lead researcher, professor and acting chair of NJIT’s Department of Chemistry and Environmental Sciences.
Harvesting energy directly from abundant solar radiation using solar cells is increasingly emerging as a major component of future global energy strategy, Mitra said. Yet, when it comes to harnessing renewable energy, challenges remain.
Expensive, large-scale infrastructures, such as windmills or dams, are necessary to drive renewable energy sources such as wind or hydroelectric power plants. Purified silicon, which is also used for making computer chips and continues to rise in demand, is a core material for fabricating conventional solar cells. However, processing a material such as purified silicon is beyond the reach of most consumers.
“Developing organic solar cells from polymers, however, is a cheap and potentially simpler alternative,” Mitra said. “We foresee a great deal of interest in our work because solar cells can be inexpensively printed or simply painted on exterior building walls and/or rooftops. Imagine some day driving in your hybrid car with a solar panel painted on the roof, which is producing electricity to drive the engine. The opportunities are endless.”
The solar cell developed at NJIT uses a carbon nanotubes complex, which is a molecular configuration of carbon in a cylindrical shape. Although estimated to be 50,000 times smaller than a human hair, just one nanotube can conduct current better than any conventional electrical wire.
Mitra and his research team took the carbon nanotubes and combined them with tiny carbon fullerenes (sometimes known as buckyballs) to form snake-like structures. Buckyballs trap electrons, although they can’t make electrons flow. Add sunlight to excite the polymers, and the buckyballs will grab the electrons. Nanotubes, behaving like copper wires, then will be able to make the electrons or current flow.
“Someday, I hope to see this process become an inexpensive energy alternative for households around the world,” Mitra said. EC | 3.87387 |
Food systems are often described as comprising four sets of activities: those involved in food production, processing and packaging, distribution and retail, and consumption. All encompass social, economic, political, and environmental processes and dimensions. To analyze the interactions between global environmental change and food systems, as well as the tradeoffs among food security and environmental goals, a food system can be more broadly conceived as including the determinants (or drivers) and outcomes of these activities. The determinants comprise the interactions between and within biogeophysical and human environments that determine how food system activities are performed. These activities lead to a number of outcomes, some of which contribute to food security and others that relate to the environment and other societal concerns. These outcomes are also affected directly by the determinants.
Food security is the principal policy objective of a food system. Food security outcomes are described in terms of three components and their subcomponents: food availability, i.e., production, distribution, and exchange; food access, i.e., affordability, allocation, and preference; and food use, i.e., nutritional and social values and safety. Although the food system activities have a large influence on food security outcomes, these outcomes are also determined directly by socio-political and environmental drivers. These outcomes vary by historical, political, and social context.
To capture these concepts holistically and to allow the analysis of impacts of global environmental change, adaptations, and feedbacks, a food system must include the drivers that shape food system activities, the activities themselves, the food security, environmental, and other societal outcomes they produce, and the feedbacks among them.
Contrary to popular belief cranberries do not grow in water. They grow in beds called 'bogs' made of impermeable layers of sand, peat, gravel, clay and organic decaying matter from the cranberry vines. The vines can only grow and survive when special conditions exist such as an acid peat soil, an adequate supply of fresh water for irrigation and periodic flooding, a supply of sand and a long growing season that extends from April to November. There are two main methods of harvesting cranberries - dry and wet harvesting.
EDEN Organic Dried Cranberries are a native American variety Vaccinium macrocarpon organically grown on family owned cranberry bogs in Québec, Canada. Ours are wet harvested, considered by some to be the best way to harvest cranberries. First our grower floods the bog with about 12 to 18 inches of water. Next, a simple machine called a 'water reel' stirs up the water and loosens the cranberries from their vines. The water reel is nicknamed the 'egg beater' and resembles a paddle boat. Cranberries have small air bubbles in the center, and once loosened from the vines they float to the surface of the flooded bog. Harvesters wade out into the bog when all the cranberries are on the surface. Using a specially designed gathering device they hand corral the berries into a large circle forming a thick red carpet of berries which are then loaded into trucks and taken to the processing station. Here the cranberries are cleaned, sorted, and quick frozen. When ready for drying, the cranberries are thawed and infused by immersing them in organic apple juice concentrate that is circulated over them until they reach just the right sweetness or 'Brix'. The infused cranberries are then rinsed, low heat dried, and coated very lightly with a mist of organic sunflower oil to prevent clumping. The low heat drying is warm air circulated until they are dry enough to become shelf stable, requiring no refrigeration.
Unlike most commercial dried fruit, EDEN Organic Dried Cranberries contain no added refined sugar or high fructose corn syrup. We use NO sulfites, chemical preservatives, or additives of any kind.
Cranberries are native to North America and were first used centuries ago by Native Americans, who discovered that this versatile fruit could be used not only as a food source, but also as a dye for rugs, blankets and clothing, and as a healing plant to treat arrow wounds. American Indians had many names for the cranberry, such as 'sasamanesh, ibimi, and atogua'. To the Delaware Indians it was a symbol of peace. Many Native Americans believed that the berries had a special power that could calm the nerves. Its current name comes from early Dutch and German settlers, who named the fruit 'crane berry' because its small, pink blossoms resembled the head and bill of a Sandhill crane.
Although folklore and anecdotal accounts of cranberries' healthful properties (especially the benefits to urinary tract health) have been touted for centuries, only recently has scientific research begun revealing how healthful cranberries can be. Packed with nutrients like antioxidants and other natural compounds, cranberries are a great choice for the health conscious. The USDA recently found that the high phenolic content in cranberries delivers a potent antioxidant punch, rating them one of the highest of 20 common fruits tested. To determine the antioxidant activity of various foods, the USDA uses a system referred to as Oxygen Radical Absorbance Capacity (ORAC). By testing the ability of foods and other compounds to subdue oxygen free radicals, the USDA was able to determine each compound's antioxidant capability. The ORAC value of cranberries is 1,750. Cranberries recently became the first fruit to carry a certified health claim in France.
EDEN Organic Dried Cranberries are a delicious, healthy snack, but there's no need to limit them to mere snacking. Use EDEN Organic Cranberries in baking bread, in cakes and muffins, in pie fillings and puddings, in grain and bread stuffing, in hot cereals or on cold cereals. They can also be used in making granola, granola bars, popcorn balls and caramel corn.
Effect on Instruction and Classroom Management
By thinking of assessment as part of instruction, teachers obtain
immediate instructional suggestions and make any adjustments that
are necessary. Teacher observation is a legitimate, necessary,
valuable source of assessment information. By asking children to
read aloud or to retell a portion of a selection they are reading,
the teacher receives immediate information about the level of challenge
that the selection presents to various students (Bembridge, 1992).
Classroom organization and management suggestions flow from ongoing
assessment data. Children who need added support, for example, may
be encouraged to work in cooperative groups. Students who are having
difficulty gain the support they need, and very able students gain
deeper understanding of the materials they are reading as they explain
the materials to others (Johnson & Johnson, 1992).
Copyright © 1997 Houghton Mifflin Company. All Rights Reserved.
A typical BPM uses a differential pressure sensor to measure cuff or arm pressure. As the output of this sensor lies within a few microvolts (30-50 µV), the output pressure signal has to be amplified using a high-gain instrumentation amplifier with a good common-mode rejection ratio (CMRR). Usually the gain and CMRR need to be around 150 and 100 dB respectively. The frequency of the oscillatory pulses in the pressure signal lies between 0.3-11 Hz, with an amplitude of a few hundred microvolts. These oscillations are extracted using band-pass filters with gain around 200 and cutoff frequencies at 0.3 and 11 Hz. A 10-bit ADC with a sampling rate of 50 Hz is used to digitize the pressure sensor and oscillatory signals. Two timers are used to calculate the heart rate and implement safety-timer functionality. A safety timer limits how long pressure is kept on a subject's arm; this timer is part of the safety regulations in the AAMI standards. A microcontroller core calculates the systolic and diastolic pressure values using an oscillometric algorithm. The cuff is inflated and deflated using motors driven by PWMs.
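The oscillometric algorithm mentioned above is commonly implemented as a maximum-amplitude method: the envelope of the extracted 0.3-11 Hz oscillations peaks near mean arterial pressure, and the systolic and diastolic pressures are read off where the envelope falls to fixed fractions of that peak. The sketch below assumes textbook ratios of 0.55 and 0.85; real monitors use proprietary, clinically validated ratios and additional signal conditioning.

```python
# Hedged sketch of a basic oscillometric estimate (maximum-amplitude
# method). The 0.55/0.85 ratios are typical textbook values, not taken
# from any particular monitor. Assumes samples from the deflation phase,
# i.e. cuff pressure falling monotonically.
import numpy as np

def oscillometric_bp(cuff_pressure, oscillation_envelope):
    """Estimate (systolic, diastolic) in mmHg from deflation-phase samples.

    cuff_pressure: slowly falling cuff pressure, mmHg (1-D array)
    oscillation_envelope: amplitude envelope of the 0.3-11 Hz oscillations
    """
    cuff_pressure = np.asarray(cuff_pressure, dtype=float)
    env = np.asarray(oscillation_envelope, dtype=float)
    i_map = int(np.argmax(env))          # envelope peak ~ mean arterial pressure
    a_max = env[i_map]
    # Systolic: on the high-pressure side of the peak, envelope = 0.55 * max
    i_sys = int(np.argmin(np.abs(env[:i_map + 1] - 0.55 * a_max)))
    # Diastolic: on the low-pressure side of the peak, envelope = 0.85 * max
    i_dia = i_map + int(np.argmin(np.abs(env[i_map:] - 0.85 * a_max)))
    return float(cuff_pressure[i_sys]), float(cuff_pressure[i_dia])
```

On a synthetic bell-shaped envelope peaking at 100 mmHg, this returns a systolic value above the peak pressure and a diastolic value below it, as expected for a descending deflation ramp.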
A typical non-contact digital thermometer uses a transducer, also called a thermopile, consisting of a micromachined membrane with thermocouples to measure the thermocouple temperature and a thermistor to measure ambient temperature. The thermocouple generates a DC voltage corresponding to the temperature difference between its junctions. The output of the thermocouple is on the order of a few µV, so the signal is amplified using a low-noise precision amplifier. A voltage divider is constructed from the thermistor and an external precision voltage reference; this divider converts the change in thermistor resistance with temperature into a change in voltage. Voltages from the thermocouple and thermistor are used to calculate the thermocouple and ambient temperatures. The temperature is obtained from the voltages using a polynomial function given by the sensor manufacturer or through a look-up table of pre-stored readings. The ambient temperature is added to the thermocouple temperature to get the final temperature measurement.
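A sketch of the voltage-to-temperature step described above, using the common Beta-parameter model for an NTC thermistor in place of the manufacturer's polynomial or look-up table; all component values (reference voltage, resistor, Beta constant) are illustrative assumptions, not from any particular sensor.

```python
# Hedged sketch: converting the thermistor divider voltage to ambient
# temperature with the Beta-parameter NTC model, then adding the
# thermopile-derived temperature difference. Component values assumed.
import math

V_REF = 2.5               # precision reference across the divider, volts (assumed)
R_FIXED = 10_000.0        # fixed divider resistor, ohms (assumed)
R0, T0 = 10_000.0, 298.15 # thermistor nominal: 10 kOhm at 25 C (298.15 K)
BETA = 3950.0             # Beta constant, typical NTC datasheet value

def ambient_temp_c(v_out: float) -> float:
    """Thermistor on the low side: v_out = V_REF * Rt / (Rt + R_FIXED)."""
    r_t = R_FIXED * v_out / (V_REF - v_out)        # solve the divider for Rt
    inv_t = 1.0 / T0 + math.log(r_t / R0) / BETA   # Beta-parameter model
    return 1.0 / inv_t - 273.15

def object_temp_c(delta_t_c: float, v_thermistor: float) -> float:
    """Final reading = ambient + temperature difference seen by the thermopile."""
    return ambient_temp_c(v_thermistor) + delta_t_c
```

At the divider midpoint (1.25 V with these values) the computed resistance equals the nominal 10 kOhm, so the ambient reading is exactly 25 °C, which is a quick sanity check on the conversion.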
A segment LCD driver, RTC, push buttons, EEPROM and USB are the other peripherals needed in both of the above applications.
The components external to the microcontroller, such as the transducer, ADC, LCD driver/controller, USB controller, filters, and amplifiers, are peripheral components. These components interface to the microcontroller through either GPIOs or dedicated pins. The more external components there are, the more limitations and constraints developers have to account for, such as managing the bill of materials, higher PCB complexity, achieving FDA certification for each and every component, increased design and development time, and reduced analog IP protection.
Topic 6: Writing and Typing Tips
Using the features in the AutoCorrect Options and teaching students the quickest ways to check spelling and find synonyms will enhance their writing. Creating a template that includes the information a student needs to correctly head a paper will reduce frustration and allow the student to begin work on the content immediately.
- Participants will use the autocorrect, and auto text features to reduce the number of keystrokes and errors.
- Participants will use the right mouse button to quickly check their spelling and find synonyms of words.
- Participants will create a template that contains the necessary student information for a paper heading, such as name, date, and class name.
Teens on average need about 9 to 9.5 hours of sleep at night. But most don’t get the amount of sleep they need. School, friends, homework, activities, television, and the computer may all have a higher priority for a teen than sleep. Sleep deprivation can have serious consequences for a teen’s health and well-being. Here’s how to better understand your child’s sleep needs and what you can do to help.
Teens tend to stay up late and want to sleep late in the morning. This isn’t due to laziness or stubbornness. It is actually due to natural rhythms of the teen’s body. Body chemicals in teens work to make the teen naturally want to go to bed around midnight or later and wake up in the late morning. Early school start times conflict with these natural body rhythms. And pressures on a teen’s time after school keep him or her from going to bed early to compensate. The result is often a sleep-deprived teen.
The National Institutes of Health (NIH) reports that teens who don’t get enough sleep have trouble focusing in class and often have lower grades than they are capable of. The NIH has also found growing evidence linking a chronic lack of sleep in teens with an increased risk of being overweight, developing diabetes or heart disease, and getting infections. Teens who are sleep deprived may fall asleep in class or other inappropriate places. And for teens who are driving, being sleepy can raise the risk of a serious accident.
Is your teen sleep deprived? Watch for the following signs:
- Trouble concentrating or remembering
- Need for caffeine or other stimulants to stay awake
- Need for naps after school
- Trouble sleeping (problems falling asleep or staying asleep)
Tips to help your child get more sleep and be more alert during the day:
- Encourage your teen to get a full night’s sleep on a regular basis. Try to set a regular bedtime. Help your teen avoid staying up late to do homework or study. If extracurricular activities after school are too time-consuming, consider cutting back.
- Have your teen get up at the same time every morning. Discourage sleeping in on weekends to “catch up on sleep.” This does more harm than good by throwing sleep rhythms off.
- Limit caffeine intake. Don’t let your child have caffeine after lunchtime.
- Discourage doing anything in bed other than sleeping, such as reading, writing, eating, watching TV, talking on the phone, or playing videos or other games.
- Restrict TV and computer use (which can be stimulating) for at least an hour before bedtime. Instead, encourage reading, listening to quiet music, writing in a journal, or other calming activity during this time.
- Give your teen a warm, non-caffeinated beverage (such as milk) before bed.
- Make the bedroom conducive to sleep. Take the TV, computer, and phone out of the bedroom. Make sure the bedroom is cool and as dark and quiet as possible.
- Turn a bright light on in the child’s room in the morning. The bright light helps the body wake up and shuts down production of sleep hormones. Alarm clocks with a light feature are available on the Internet.
The following can be signs of a more serious problem that can be treated. Let the child’s doctor know if your child:
- Falls asleep during the day
- Has leg twitching or moving when trying to fall asleep, or extremely restless sleep
- Has insomnia (trouble falling asleep or staying asleep) often
Satellites are tracing Europe's forest fire scars
Burning with a core temperature approaching 800°C and spreading at up to 100 metres per minute, woodland blazes bring swift, destructive change to landscapes: the resulting devastation can be seen from space. An ESA-backed service to monitor European forest fire damage will help highlight areas most at risk of future outbreaks.
Last year's long hot summer was a bumper year for forest fires, with more than half a million hectares of woodland destroyed across Mediterranean Europe. So far this year fresh fires have occurred across Portugal, Spain and southern France, with 2500 people evacuated from blazes in foothills north of Marseille.
According to the European Commission, each hectare of forest lost to fire costs Europe's economy between a thousand and 5000 Euros.
The distinctive 'burn scars' left across the land by forest fires can be identified from space by their specific reddish-brown spectral signature in false-colour composites built from optical sensor bands in the short-wave infrared, near-infrared and visible channels.
A new ESA-backed, Earth Observation-based service is making use of this fact, employing satellite imagery from SPOT and Landsat to automatically detect the 2004 burn scars within fire-prone areas of the Entente region of Southwest France, within the Puglia and Marche regions of Italy and across the full territory of Spain.
Burn scar detection is planned to take place on a seasonal basis, identifying fires covering at least one hectare to a standard resolution of 30 metres, with detailed damage assessment available to a maximum resolution of 2.5 metres using the SPOT 5 satellite.
Partner users include Italy's National Civil Protection Department, Spain's Dirección general para la Biodiversidad – a directorate of the Environment Ministry that supports regional fire-fighting activities with more than 50 aircraft operating from 33 airbases – as well as France's National Department of Civil Protection (DDSC) and the country's Centre D'Essais Et De Recherce de l'Entente (CEREN), the test and research centre of the government organisation tasked with combating forest fires, known as the Entente Interdépartementale.
"To cope with fire disasters, the most affected Departments in the south of France have decided to join forces to ensure effective forest fire protection," explained Nicolas Raffalli of CEREN. "Within the Entente region we have an existing fire database called PROMETHEE, which is filled out either by firemen, forestry workers or policemen across the 13 Departments making up the region."
Current methods of recording fire damage vary greatly by country or region. The purpose of this new service – part of a portfolio of Earth Observation services known as Risk-EOS – is to develop a standardised burn scar mapping methodology for use throughout Europe, along with enabling more accurate post-fire damage assessment and analysis of vegetation re-growth and manmade changes within affected areas.
"We want to link up PROMETHEE with this burn scar mapping product from Risk-EOS to have a good historical basis of information," Raffalli added. "The benefit is that it makes possible a much more effective protection of the forest."
Characterising the sites of past fires to a more thorough level of detail should mean that service users can better forecast where fires are most likely to break out in future, a process known as risk mapping.
Having been validated and geo-referenced, burn scar maps can then be easily merged with other relevant geographical detail. The vast majority of fires are started by the actions of human beings, from discarding cigarette butts up to deliberate arson. Checking burn scar occurrences against roads, settlements and off-road tracks is likely to throw up correlations.
These can be extrapolated elsewhere to help identify additional areas at risk where preventative measures should be prioritised. And overlaying burn scar maps with a chart of forest biomass has the potential to highlight zones where new blazes would burn the fiercest. Once such relatively fixed environmental elements, known as static risks, are factored in, other aspects that change across time – including temperature, rainfall and vegetation moisture – can be addressed. These variables are known as dynamic risks. At the end of the risk mapping process, the probability of fire breaking out in a particular place and time can be reliably calculated.
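The combination of static and dynamic risks described above can be sketched as a simple weighted score. This is a toy illustration only — the factor names, scales and weights are assumptions for demonstration, not the published Risk-EOS methodology:

```python
def fire_risk(static, dynamic, weights=None):
    """Combine static risk factors (terrain-linked, slowly changing)
    and dynamic ones (weather-linked), each scaled 0..1 with higher
    meaning riskier, into a single 0..1 score via a weighted average.
    Equal weights are used unless overridden."""
    factors = {**static, **dynamic}
    weights = weights or {name: 1.0 for name in factors}
    total = sum(weights[name] for name in factors)
    return sum(factors[name] * weights[name] for name in factors) / total

# Hypothetical cell in a risk map: close to roads, high biomass,
# currently hot and dry weather.
score = fire_risk(
    static={"near_roads": 0.8, "past_burn_density": 0.6, "biomass": 0.7},
    dynamic={"temperature": 0.9, "dryness": 0.85},
)
```

Computing such a score for every grid cell, with the dynamic factors refreshed daily from meteorological data, yields the kind of daily risk map the article mentions.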
The Risk-EOS burn scar mapping service began last year. The intention is to develop further fire-related services by the end of 2007, including daily risk maps combining EO with meteorological and vegetation data.
Another planned service will identify 'hot spots' during fires, and map fire events twice a day, permitting an overall assessment of each fire's development and the damage being done. A 'fires memory atlas' set up at national or regional level will allow the routine sharing of all information related to forest fire events and fire risk.
"For the future I think near-real time fire and hot spot mapping would obviously be extremely useful," Raffalli concluded. "With these products those managing the situation could see where the fire is, as well as the hot spots inside it. They can then deploy ground and aerial resources with maximum efficiency."
Building on ITALSCAR
Italy's National Civil Protection Department is providing advice on the implementation of the Risk-EOS service, based on previous experience with an ESA Data User Programme (DUP) project called ITALSCAR.
Run for ESA by the Italian firms Telespazio una Societá Finmeccanica and Vitrociset, ITALSCAR charted burn scars across the whole of Italian territory occurring between June and September during the years 1997, 1998, 1999 and 2000.
For the last quarter of a century, Italian legislation has required that all burned areas be recorded and mapped, as no land use change is permitted on such terrain for 15 years after a blaze, no new building construction for the next ten years, and no new publicly funded reforestation for five years.
However, the mapping of burn scars is the responsibility of local administrations, and their methodologies and overall effectiveness are highly variable. No central cartographic archive of burn scar perimeters exists: the closest equivalent is a card index (Anti Incendio Boschivi, or AIB) recording fire-fighting interventions by the Italian Forest Guards.
The ITALSCAR burn scar maps were produced across a wide variety of different forest classes. Burn scars were mapped pixel by pixel using an automated software system, followed up with manual photo-interpretation for quality assurance. To ensure confidence in the results they were validated using ground surveys and checked against reports from local fire brigades and Forest Guards' AIB records.
The Risk-EOS burn scar mapping service is based around this same methodology.
Managed by Astrium, Risk-EOS also incorporates services for flood as well as fire risk management. It forms part of the Services Element of Global Monitoring for Environment and Security (GMES), an initiative supported jointly by ESA and the European Commission and intended to establish an independent European capability for worldwide environmental monitoring on an operational basis. | 3.64307 |
This little LED-lit cube is much more than just a paper lantern: It’s a translucent and flexible thin-film electronic circuit that hooks up a battery to an LED, limber enough to be folded into an origami box. And the coolest thing about circuits like these? You can make them at home.
In what follows, we combine basic electronics (an LED Throwie) and papercraft (a traditional origami balloon) to make what might be called an "LED Foldie." The circuitry consists of aluminum foil traces, ironed onto adhesive paper such as freezer paper, photo mounting paper, or even a laser-printed pattern. Something constructed this way can then be folded so that an LED and battery fit into place to complete the circuit.
The first step in designing a three-dimensional circuit like this is to see where the parts go. After that we will unfold the model, draw circuit paths between the points that we want to connect, and go from there.
To get started, we first folded an origami balloon, and then inserted the components where we wanted them. The balloon has a convenient pocket on the side for a lithium coin cell, and a single hole that allows you to point an LED into the interior of the balloon. (And you can follow along with balloon folding in this flickr photo set.)
We marked up the locations of the battery and LED terminals on the origami balloon– while still folded– and then unfolded our “circuit board.” At this point, we have the component locations marked, but no lines drawn between them.
The next step is to add those circuitry lines (circuit board wires, or traces) between the battery and LED. One thing to keep in mind for interfacing papercraft to electronics: it’s helpful if the circuit traces fold over the leads for the LED in order to maintain good contact.
After connecting the dots (so to speak) we have the resulting layout of our circuit. (See PDF below as well.) Pretty simple here– only two wires! The two round pads contact the two sides of the battery, and the two angled pads contact the two leads of the LED.
The next step is to actually fabricate our circuit board. We’ve actually found two slightly different techniques that work well, so we’ll show you both. First is the “Freezer paper” method (which also works with sheets of dry mount adhesive), where you laminate foil traces to the plastic-coated paper. Second is the “Direct Toner” method, where you print out a circuit diagram on a laser printer and laminate the foil to the printed toner.
(Both of these methods of fabricating paper circuitry can be applied in all kinds of other arenas besides origami. Our origami balloon example provides a good demonstration of the techniques!)
METHOD I: The “Freezer paper” method
Next, cut out your traced pattern. Scissors work well, of course. Be careful not to tear the foil!
Prefolding your paper and comparing to your circuit layout will show you where to lay the aluminum foil pieces out on your paper. Then, use an iron to laminate the foil to the paper.
What kind of paper? The easiest (but slightly obscure) choice is “dry mount adhesive,” which is tissue paper infused with high-quality hot-melt glue. You can get sheets or rolls of it from art supply places for use in mounting artwork and photography. Much more common and equally workable is freezer paper. Freezer paper is a common plastic-coated paper that you can get on rolls at the grocery store– look in the section with the aluminum foil. (Place foil on the shiny side of the freezer paper).
We used a small hobby iron to fuse the foil to our different papers, but a regular iron works just as well. The dry mount adhesive did not require much heat, while the freezer paper needed the iron to be on high– that plastic has to melt. We folded a larger sheet of parchment paper over the whole circuit during ironing in order to keep the adhesives from sticking to the iron and other surfaces.
We also experimented with waxed paper, which was not sticky enough for the aluminum foil. We even tried ironing copper leaf onto waxed paper, and though it adhered well, it was too fragile and the traces broke upon folding. It would probably work reasonably well in an application where folding isn’t required: It was absolutely beautiful and completely unreliable for origami.
Once the foil is adhered to the paper, it is time to refold it.
Insert the components, and it lights up.
If it doesn’t light up, try turning your battery around. If it still doesn’t light up, make sure your LED leads are contacting the traces.
Hint for this circuit: You won’t hurt the LED by plugging it in backwards to that little battery, so this is a better method than actually trying to keep track of the polarity.
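A side note on why this two-component circuit tolerates such casual assembly: a lithium coin cell's relatively high internal resistance limits the current all by itself, which is why Throwie-style circuits need no series resistor. The numbers below are typical datasheet-style values, not measurements from this project:

```python
def led_current_ma(v_batt=3.0, v_led=3.0, r_internal_ohms=15.0):
    """Rough LED current estimate (in mA) from Ohm's law: the only
    resistance in the loop is the coin cell's internal resistance,
    so I = (V_batt - V_led) / R_internal. Returns 0 when the LED's
    forward voltage meets or exceeds the battery voltage. This is a
    crude model -- real cells sag under load and LEDs are nonlinear."""
    return max(0.0, (v_batt - v_led) / r_internal_ohms) * 1000.0

# A red LED (lower forward voltage) draws more than a blue/white one:
red  = led_current_ma(v_led=2.0)  # roughly tens of mA, self-limited
blue = led_current_ma(v_led=3.0)  # ~0 in this crude model, though a
                                  # real blue LED still glows dimly
```

The same back-of-envelope check explains why a larger battery pack, with its lower internal resistance, *would* need a resistor to protect the LED.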
The LED Foldie naturally wants to sit on the heaviest part, the battery, with the LED projecting into the side of the balloon. The weight of the battery helps keep the circuit connected.
METHOD II: The “Direct Toner” method
Our last breakthrough came when we created a pdf pattern to print out. We realized that you could fuse the foil directly to the toner from a laser printer. You can print out the pattern (laser printers only: no inkjet!) and iron your foil pieces directly to the paper.
Caveat: while the foil sticks well to the toner, it isn’t quite strong enough that you can just iron on a giant sheet of foil and have it only stick where there’s toner, so you still need to cut out the foil shapes, at least roughly.
Place your foil carefully over the pattern, and iron very well, very hot. Be sure to cover your work with parchment paper or you will get toner on your iron.
When your foil is stuck to the toner, cut out the square and get ready to fold.
Inflate, add battery and LED, and admire the glow. As before, if you have trouble, try turning your battery around and making sure that the leads of the LED are making contact with the foil.
And there it is: a bridge between papercraft and electronics, or perhaps between etch-at-home printed circuit boards and high-end flex PCBs. We think that there’s some potential here.
Your turn! What kinds of origami can you light up? As always, we’d love to see your project pictures in the Evil Mad Science Auxiliary. | 3.315408 |
The Evolution Deceit
Imaginary Dinosaur-Bird Links
As you saw in earlier chapters, it's impossible for birds to have evolved from dinosaurs, since no mechanism could have eliminated the enormous physiological differences between the two groups. Despite this, evolutionists still promote the scenario that birds evolved from dinosaurs in various ways. They frequently resort to news reports, using reconstruction pictures and sensational headlines about these so-called dino-birds, as if they represented established facts. These accounts are intended to convince people that feathered dinosaurs once lived on Earth.
This scenario is presented persistently as if it were a proven fact. All objections, criticisms and counter-evidence are totally ignored, clearly indicating that this is deliberate propaganda intended to impose dino-bird myths on society. The biased fossil interpretations we shall examine in the following pages reveal the scenario's hollow, deceptive nature.
The claim that birds evolved from dinosaurs is actually opposed by a great many paleontologists and anatomists who otherwise support the theory of evolution. As you have seen, two renowned ornithologists, Alan Feduccia and Larry Martin, think this scenario is completely erroneous. This is set out in the textbook Developmental Biology, taught in U.S. universities:
Not all biologists believe that birds are dinosaurs... This group of scientists emphasize the differences between dinosaurs and birds, claiming that the differences are too great for the birds to have evolved from earlier dinosaurs. Alan Feduccia, and Larry Martin, for instance, contend that birds could not have evolved from any known group of dinosaurs. They argue against some of the most important cladistic data and support their claim from developmental biology and biomechanics. 170
Many evolutionist publications refer to the thesis that birds evolved from dinosaurs as if it were based on solid evidence and accepted by the entire scientific community. They try to give the impression that the only subject up for debate is which species of dinosaur birds evolved from. Although Martin earlier supported the dino-bird claim, he eventually realized in the light of his research that it was invalid, and abandoned his former ideas:
Every time I look at the evidence formerly discovered and then make a claim about the origins of the theropod, I saw its inaccuracy. That is because everything shows its inadequacy. The truth of the matter is that…I seriously suspect that they have the same features with birds and don't think that there exist striking features supporting that birds are of theropod origin. 171
Feduccia admits that concerning the origin of birds, the theory of evolution finds itself in a state of uncertainty. He attaches no credence to the deliberately maintained dino-bird controversy, which is in fact groundless. Important information is contained in his article, "Birds Are Dinosaurs: Simple Answer to a Complex Problem," published in October 2002 in The Auk, the journal of the American Ornithologists' Union, in which the most technical aspects of ornithology are discussed. Feduccia describes in detail how the idea that birds evolved from dinosaurs, raised by John Ostrom in the 1970s and fiercely defended ever since, lacks any scientific evidence, and how such an evolution is impossible.
Feduccia is not alone among evolutionists in this regard. Peter Dodson, the evolutionist professor of anatomy from Pennsylvania University, also doubts that birds evolved from theropod dinosaurs:
I am on record as opposing cladistics and catastrophic extinction of dinosaurs; I am tepid on endothermic dinosaurs; I am skeptical about the theropod ancestry of birds. 172
Despite being an evolutionist, Dodson admits the unrealistic claims of the theory of evolution, and has come in for severe criticism from his evolutionist colleagues. In one article, he responds to these criticisms:
Personally, I continue to find it problematic that the most birdlike maniraptoran theropods are found 25 to 75 million years after the origin of birds . . . .Ghost lineages are frankly a contrived solution, a deus ex machina required by the cladistic method. Of course, it is admitted that late Cretaceous maniraptorans are not the actual ancestors of birds, only "sister taxa." Are we being asked to believe that a group of highly derived, rapidly evolving maniraptorans in the Jurassic gave rise to birds, as manifested by Archaeopteryx, and then this highly progressive lineage then went into a state of evolutionary stasis and persisted unchanged in essential characters for millions of years? Or are actual ancestors far more basal in morphology and harder to classify? If the latter, then why insist that the problem is now solved? 173
Alan Feduccia sets out an important fact concerning the dino-birds said to have been found in China: the "feathers" on the fossils said to be those of feathered dinosaurs are definitely not bird feathers. A considerable body of evidence shows that these fossil traces have nothing at all to do with bird feathers. He says this in an article published in The Auk magazine:
Having studied most of the specimens said to sport protofeathers, I, and many others, do not find any credible evidence that those structures represent protofeathers. Many Chinese fossils have that strange halo of what has become known as dino-fuzz, but although that material has been "homologized" with avian feathers, the arguments are far less than convincing. 174
Citing Richard O. Prum, one of the supporters of the dino-bird claims, as an example, Feduccia goes on to mention the prejudiced approach so prevalent on the subject:
Prum's view is shared by many paleontologists: birds are dinosaurs; therefore, any filamentous material preserved in dromaeosaurs must represent protofeathers. 175
Latest Research Has Dealt a Severe Blow to Feathered Dinosaur Claims
The fossilized structures referred to as dinosaur feathers were shown by Theagarten (Solly) Lingham-Soliar, a paleontologist at Durban-Westville University in South Africa, to be nothing more than decayed connective tissue. Professor Lingham-Soliar performed an experiment by burying a dolphin for a year in river mud that was semi-permeable to air. A dolphin was selected because its flesh is easy to analyze. At the end of this period, the professor examined the dolphin's bundles of collagen—which constitutes connective tissue in the bodies of most living things—under a microscope. According to him, the decayed collagen in the dolphin's body bore "a striking resemblance to feathers."1 The German journal Naturwissenschaften commented that: "The findings throw serious doubt on the virtually complete reliance on visual image by supporters of the feathered dinosaur thesis and emphasize the need for more rigorous methods of identification using modern feathers as a frame of reference." 2 With this finding, it emerged that even a dolphin could leave behind traces of apparent feathers, showing once again that there are no grounds for regarding extinct dinosaurs with "feathers" as proto-birds.
1. Stephen Strauss, "Buried dolphin corpse serves science," 11 November 2003; http://www.theglobeandmail.com/servlet/ArticleNews/TPStory/LAC/20031111/UDINO11/TPScience/
According to Feduccia, one factor that invalidates this preconception is the presence of these same traces in fossils that have no relationship with birds:
Most important, "dino-fuzz" is now being discovered in a number of taxa, some unpublished, but particularly in a Chinese pterosaur and a therizinosaur, which has teeth like those of prosauropods. Most surprisingly, skin fibers very closely resembling dino-fuzz have been discovered in a Jurassic ichthyosaur and described in detail. Some of those branched fibers are exceptionally close in morphology to the so-called branched protofeathers ("Prum Protofeathers") described by Xu. That these so-called protofeathers have a widespread distribution in archosaurs is evidence alone that they have nothing to do with feathers. 176
Feduccia recalls that various structures found around these fossils and thought to belong to them, were later determined to consist of inorganic matter:
One is reminded of the famous fernlike markings on the Solnhofen fossils known as dendrites. Despite their plantlike outlines, these features are now known to be inorganic structures caused by a solution of manganese from within the beds that reprecipitated as oxides along cracks or along bones of fossils. 177
The fossil beds preserve not only indefinite structures such as dino-fuzz, but also true bird feathers. Yet all the fossils presented as feathered dinosaurs have been found in China. Why have such fossils emerged nowhere else in the world? Feduccia draws attention to this intriguing state of affairs:
One must explain also why all theropods and other dinosaurs discovered in other deposits where integument is preserved exhibit no dino-fuzz, but true reptilian skin, devoid of any featherlike material (Feduccia 1999), and why typically Chinese dromaeosaurs preserving dino-fuzz do not normally preserve feathers, when a hardened rachis, if present, would be more easily preserved. 178
Feduccia states that some of these creatures portrayed as feathered dinosaurs are simply extinct reptiles with dino-fuzz and that others are genuine birds:
There are clearly two different taphonomic phenomena in the early Cretaceous lacustrine deposits of the Yixian and Jiufotang formations of China, one preserving dino-fuzz filaments, as in the first discovered, so-called "feathered dinosaur" Sinosauropteryx (a compsognathid), and one preserving actual avian feathers, as in the feathered dinosaurs that were featured on the cover of Nature, but which turned out to be secondarily flightless birds. 179
Peter Dodson, on the other hand, says, "I hasten to add that none of the known small theropods, including Deinonychus, Dromaeosaurus, Velociraptor, Unenlagia, nor Sinosauropteryx, Protarcheaeopteryx, nor Caudipteryx is itself relevant to the origin of birds."180 He means that these creatures cannot be the ancestors of birds because the earliest known bird, Archaeopteryx, lived long before the Cretaceous Period.
In short, the fossils portrayed as feathered dinosaurs or dino-birds either belong to certain flightless birds like today's ostriches, or else to reptiles possessed of a structure known as dino-fuzz which has nothing to do with actual feathers. There exists not a single fossil that might represent an intermediate form between birds and reptiles. Therefore, the claim that fossils prove that birds descended from dinosaurs is completely unrealistic.
1) The Alleged Intermediate Form: Mononychus
Mononychus is one of the fossils used as a vehicle for evolutionist propaganda and depicted with feathers in the 26 April 1993 edition of Time magazine. It was later realized, on the basis of further evidence, that this creature was not a bird.
One of the best-known fossils in the alleged dino-bird chain is Mononychus, discovered in Mongolia in 1993 and claimed to be an intermediate form between dinosaurs and birds. Although not the slightest trace of feathers was found in this fossil, Time magazine reconstructed the creature with feathers on the cover of its 26 April, 1993 issue. Subsequent evidence revealed that Mononychus was no bird but a fossorial (digging) theropod.
The fact that this fossil had a bird-like breastbone and wrist bones led evolutionists to interpret Mononychus as an intermediate form. Biased interpretations and support from the media gave the impression that some proof existed to back this up. However, the anatomical features depicted as evidence are also found in other animals, such as moles. These inferences represent no evidence at all and they have only led to misinterpretations.
Writing in Science News, Richard Monastersky reports, based on observations, why this fossil cannot be classified:
Mongolian and U.S. researchers have found a 75-million-year-old bird-like creature with a hand so strange it has left paleontologists grasping for an explanation. . . Paul Sereno of the University of Chicago notes that Mononychus had arms built much like those of digging animals. Because moles and other diggers have keeled sternums and wrists reminiscent of birds, the classification of Mononychus becomes difficult.181
In addition, this fossil is at least 80 million years younger than Archaeopteryx—which totally undermines any proposed ancestral relationship between the two.
2) Bambiraptor Feinbergi, Depicted with Imaginary Feathers
The evolutionist media immediately subject certain bird-like features to biased interpretation. The fossil Bambiraptor feinbergi, claimed to be an intermediate form between dinosaurs and birds, was depicted as a feathered reptile in media illustrations. However, there is no evidence that this creature ever had feathers.
In 1994, another dino-bird claim was made on behalf of a fossil called Bambiraptor feinbergi, estimated to be 75 million years old. Found in the Glacier National Park in northern Montana, the fossil is 95% complete. Evolutionists promptly claimed that it represents an intermediate form between dinosaurs and birds. When the fossil, belonging to a dinosaur, was introduced as an alleged dino-bird, the report admitted, "Feathers, however, have not yet been found."182 Despite this reservation, the media drew the animal as a feathered creature, and the missing details were added using plenty of creative imagination.
The most evident objection to this so-called missing link is, again, an error in dating: this alleged intermediate form is 75 million years younger than Archaeopteryx, itself a species of flying bird. Far from providing evidence for evolution, this fossil therefore demolishes the ancestral relationship claimed by evolutionists. According to Ohio University professor of zoology John Ruben:
A point that too many people always ignored, however, is that the most birdlike of the dinosaurs, such as Bambiraptor and Velociraptor, lived 70 million years after the earliest bird, Archaeopteryx. So you have birds flying before the evolution of the first birdlike dinosaurs. We now question very strongly whether there were any feathered dinosaurs at all. What have been called feathered dinosaurs were probably flightless birds. 183
Evolutionists use a few bird-like characteristics as grounds for their preconceived interpretations. Yet the effort of building a line of descent based on similarities is full of contradictions that evolutionists cannot explain. Whenever evolutionists construct an alleged evolutionary relationship between clearly different living things based on similar structures, they immediately close the subject by describing it as "parallel evolution." They claim that living things with similar complex organs but with no ancestors in common, evolved independently. However, since they cannot account for the origin of these complex organs in even one living thing, their statements that these organs supposedly evolved several times presents a serious predicament.
Alan Feduccia states that certain similarities between birds and dinosaurs do not show any evolutionary relationship between the two groups:
Bambiraptor is a small dinosaur, but it does have a number of birdlike features, as do many other forms. However there is nothing special about hollow bones, as some mammals and frogs have them. The problem, of course, is that Bambiraptor is some 80 million years beyond Archaeopteryx, and yet is claimed to be the dinosaur most close to bird ancestry. That alone should be a red flag, and a warning that the situation is far more complicated than suspected. 184
3) Confuciusornis Sanctus: Identical to Modern Birds
Two paleontologists, Lianhai Hou and Zhonghe Zhou, researching at the Vertebrate Paleontology Institute in China in 1995, discovered a new species of fossilized bird, which they named Confuciusornis sanctus. This was presented to the public as the earliest flying dinosaur, even as evidence for how hands used for grasping turned into hands used for flight. According to Alan Feduccia, however, this fossil is one of the frequently encountered beaked birds. This one had no teeth, and its beak and feathers share the same features as present-day birds. There are claws on its wings, as with Archaeopteryx, and its skeletal structure is identical to those of modern-day birds. A structure known as the pygostyle, which supports the tail feathers, can also be seen.
In short, evolutionists regarded this fossil as a semi-reptile and the earliest ancestor of all birds, even though it is of a similar age (about 142 million years) to Archaeopteryx and bears a close resemblance to present-day birds. This clearly conflicts with the evolutionist thesis that Archaeopteryx is the earliest ancestor of all birds. 185
This is also definitive proof that Archaeopteryx and other archaic birds are not intermediate forms. These and similar fossils show no evidence that different bird species evolved from earlier ones. On the contrary, it proves that present-day birds and certain unique bird species similar to Archaeopteryx lived at the same time. Some of these species, such as Confuciusornis and Archaeopteryx, are extinct, but a few have survived to the present day.
What is in the heavens and in the Earth belongs to Allah. Allah encompasses all things. (Surat an-Nisa, 126)
4) Protarchaeopteryx Robusta and Caudipteryx Zoui: Vehicles for Biased Interpretations
Caudipteryx zoui, Protarchæopteryx robusta
The fossils Protarchæopteryx robusta and Caudipteryx zoui do not belong to dinosaurs, but to extinct flightless birds. The effort to portray these creatures as dinosaurs is an example of evolutionists' eagerness to produce evidence.
In the summer of 1996, farmers working in the Yixian Formation found three separate turkey-sized fossils, so well preserved as to give genuine evidence of bird feathers. At first, Ji Qiang and his colleague Ji Shu-An concluded that these fossils must belong to a single species. Noting their surprising similarity to Archaeopteryx, they gave the creature the name Protarchaeopteryx robusta.
During his research in the autumn of 1997, Philip Currie concluded that these fossils belonged to two different species, neither of which resembled Archaeopteryx. The second species was given the name Caudipteryx zoui. 186
The discoveries of the Protarchæopteryx robusta and Caudipteryx zoui fossils were depicted as evidence that birds evolved from theropod dinosaurs. 187 The popular press stated that these fossils were definitely the so-called ancestors of birds. One commentator even wrote that the dinosaur-bird link was "now pretty close to rock solid."188 However, this certainty was again, only a biased interpretation.
According to evolutionist claims, Caudipteryx and Protarchaeopteryx were small dinosaurs whose bodies were largely covered in feathers. But on their wings and tails were longer and more complex feathers, arranged like those in present-day birds. However, it is no surprise that these creatures should have feather arrangements similar to modern birds', because their feathers are symmetrically shaped, as observed in present-day flightless birds.189 Therefore, the creatures in question are flightless birds, not dinosaurs.
In severely criticizing the dino-bird dogma, Larry Martin and Alan Feduccia stated that these fossils were flightless bird species like the modern ostrich. 190
But adherents of the dino-bird theory are reluctant to accept this because they want to classify the creatures as dinosaurs, even though this fossil provides no support for evolutionist claims. Indeed, this fossil represents a new contradiction to evolutionists' alleged ancestral relationships.
According to the evolutionist scenario, these dinosaurs and modern birds both have a special bone that lets them bend their wrists. Again according to evolutionist claims, this feature enabled them to move their forefeet in a wide manner, to catch fleeing prey with their long arms and gripping talons. This allegedly powerful beating movement represented an important part of the wingbeats the today's birds use to fly. However, such interpretations are scientifically invalid, because flight consists of far more complex actions than just wing beating:
Any forward beating movement gives rise to a counter impulse that propels the bird backward. For the purpose of flight, the main flight feathers are arranged at such an angle as to push the air back and propel the bird forward. As in planes, the wings have a special aerofoil shape, which causes air to flow faster over the upper surface than the lower. This, according to the Bernoulli principle, reduces air pressure on the upper surface and creates lift. This is the main factor in take-off, but there is also the question of Newton's Third Law—the reaction to the air being propelled downward. 191
While refuting the theory of evolution's dino-bird claims, the world of science also confirms that living things are perfectly created. The attitude of evolutionist scientists clearly reveals that they are blindly devoted to the theory in question.
In addition, the structure of a wing suited to catching prey is very different from that created for beating in flight. A feathered wing is no advantage to a bird using its wings to catch prey, because the broad surface of a feathered wing will only increase air resistance and make movement more difficult. If the bird flapped its wings for hunting, as evolutionists maintain, then its wing structure should help the bird move forward by pushing air back. It would therefore be a greater advantage for the bird's wings to let air pass through them, like a sieve or flyswatter. Thus evolutionist accounts are full of illogicalities that conflict with their own claims.
In addition to its feathers, Caudipteryx has a series of other features showing it to be a bird—such as its diet. Because Caudipteryx was portrayed as a theropod from the moment it was first unearthed, it was assumed to be a carnivore.192 But there were no teeth in its skull and lower jaw, and the first two fossil specimens contained the remains of crops, which birds use for digesting plant materials.193 Organs such as the crop are found only in birds and not in any species of the theropod family. 194
Protarchæopteryx and Caudipteryx are therefore extinct birds. The only reason they are referred to as dinosaurs is because that's what evolutionists want them to be.
5) Sinosauropteryx: Another Fossil Subjected to Speculative Claims
Today's evolutionists have entirely abandoned their claim that the creature was feathered. But a dogmatic approach towards evolution and accepted preconceptions make such errors inevitable.
With every new fossil discovery, evolutionists speculate about the dinosaur-bird link. Every time, however, their claims are refuted as a result of detailed analyses.
One example of such dino-bird claims was Sinosauropteryx, announced with enormous media propaganda in 1996. Some evolutionist paleontologists maintained that this fossil reptile possessed bird feathers. The following year, however, examinations revealed that these structures so excitedly described as feathers were actually nothing of the sort.
One article published in Science magazine, "Plucking the Feathered Dinosaur," stated that the structures had been misperceived as feathers by evolutionist paleontologists:
Exactly 1 year ago, paleontologists were abuzz about photos of a so-called "feathered dinosaur" . . . The Sinosauropteryx specimen from the Yixian Formation in China made the front page of The New York Times, and was viewed by some as confirming the dinosaurian origins of birds. But at this year's vertebrate paleontology meeting in Chicago late last month, the verdict was a bit different: The structures are not modern feathers, say the roughly half-dozen Western paleontologists who have seen the specimens. . . . Larry Martin of Kansas University, Lawrence, thinks the structures are frayed collagenous fibers beneath the skin—and so have nothing to do with birds. 195
About the speculative claims regarding feathers and Sinosauropteryx, Alan Brush of Connecticut University had this to say:
The stiff, bristlelike fibers that outline the fossils lack the detailed organization seen in modern feathers. 196
Another important point is that Sinosauropteryx had bellows-like lungs, like those in reptiles. According to many researchers, these show that the animal could not have evolved into modern-day birds with their high-performance lungs.
6) Eoalulavis Hoyasi Shares the Wing Structure of Modern-Day Birds
The wing structure in Eoalulavis hoyasi is also present in certain present-day flying birds. The bird's wing bears a small bunch of feathers attached to the "finger." When the bird wishes to slow down or descend to earth, it increases the angle of the wing to the horizon. This allows air to continue flowing over the wing's top surface, letting the bird brake without stalling.
Another fossil to demolish evolutionist claims was Eoalulavis hoyasi. This fossil, estimated at some 120 million years old, is older than all known theropod specimens. Nonetheless, the wing structure in Eoalulavis hoyasi is identical to that of some modern-day flying birds. This proves that vertebrates identical in many respects to modern birds were flying 120 million years ago.197 Any suggestion that theropods, which appeared after this creature, were the ancestors of birds is clearly irrational.
This bird's wing has a bunch of small feathers attached to the "finger." Recognizable as the alula, this structure is a basic feature of many birds alive today; it consists of several feathers and permits the bird to engage in various maneuvers during flight. But it had never before been encountered in a fossil bird from the Mesozoic. This new bird was given the name Eoalulavis hoyasi, or "ancient bird with an alula."198 Its presence shows that this bird, the size of a chaffinch, was able to fly and maneuver as well as modern-day birds.
The alula functions like the wing flap on an airplane. When the bird wants to reduce its speed or to land, it increases the angle of its wing to the horizon. The drag produced by this wing position helps the bird to slow down. But when the angle between the direction of the air flow and the wing surface gets too steep, turbulence over the wing increases until the bird loses the lift necessary to maintain flight. Like an airplane under similar circumstances, the bird is in danger of stalling in midair. The alula now enters the equation. By raising this small appendage, the bird creates a slot between it and the main part of the wing, similar to what happens when a pilot deploys a craft's wing flaps. The slot allows air to stream over the main wing's upper surface, easing turbulence and allowing the bird (or plane) to brake without stalling. 199
Birds 120 million years ago were using the same technology as that employed at present. This realization added yet another insuperable difficulty to those facing the theory of evolution.
7) Unenlagia Comahuensis: A Dino-Bird Based On Artists' Imaginations
Fernando E. Novas of the Argentine Museum of Natural Sciences in Buenos Aires and Pablo F. Puerta of the Paleontology Museum in Trelew announced a new fossil, said to be 90 million years old, in the 22 May, 1997, edition of Nature magazine, under the caption "Missing Link."200 They named this fossil Unenlagia comahuensis, meaning "half-bird from north-west Patagonia." This fossil, discovered in Argentina's Patagonia region, consisted of more than 20 pieces of the creature's leg, rib and shoulder bones. Based on these fragments, artists drew a creature complete with a neck, jaw and tail—and subsequently announced that this fossil was an intermediate stage in the transition from dinosaurs to birds.
However, Unenlagia comahuensis is manifestly a dinosaur, in many respects. In particular, certain features of its skull and the bone formations behind its eyes closely resemble those of theropods. There is also no evidence at all that it bore feathers. Evolutionist scientists, however, claimed that by raising its forearms, it could make similar movements to those used by birds for flying. But clearly, these prejudiced guesses and assumptions cannot be regarded as definitive proof.
On account of its different features, Lawrence M. Witmer of Ohio University describes this creature as a genuine "mosaic". 201
Alan Feduccia also states that Unenlagia comahuensis cannot be a missing link between dinosaurs and birds, emphasizing that it lived 55 million years after Archaeopteryx. 202
As Feduccia stressed in a 1996 article written together with several other authors in Science magazine, almost every dinosaur said to resemble the bird dates back to long after the emergence of the first true birds.203 This creates the problem that scientists refer to as the time paradox.
8) Dromaeosaur: The Dinosaur That Evolutionists Were
Air pollution is a broad term applied to all chemical and biological agents that modify the natural characteristics of the atmosphere.
Some definitions also consider physical perturbations such as noise pollution, heat, radiation or light pollution as air pollution. Some definitions include the term harmful as a requisite to consider a change to the atmosphere as pollution.
The sources of air pollution are divided into two groups: anthropogenic (caused by human activity) and natural.
Natural sources include:
- Volcanic activity
- Dust from natural sources, usually large areas of land with little or no vegetation
- Gases, such as methane, emitted by the digestion of animals, usually cattle
- Smoke from wildfires
- Dust and chemicals from farming, especially of erodible land (see Dust Bowl)
Anthropogenic sources are mostly related to burning different kinds of fuel. They include:
- Industrial activity in general
- Vehicles with internal-combustion engines
- Stoves and incinerators, especially coal-burning ones
- Paint fumes, or other toxic vapors
Contaminants of air can be divided into particles and gases.
Particles are classified by size. A usual division is into PM10 and PM2.5. PM10 are particles smaller than 10 microns (0.01 mm); they are dangerous to humans because they can be inhaled and reach the lungs. PM2.5 are particles smaller than 2.5 microns (0.0025 mm), and they are even more dangerous because they can pass through the alveoli and reach the blood.
Important pollutant gases include:
The worst single incident of air pollution in the United States occurred in Donora, Pennsylvania, in late October 1948.
- Davis, Devra, When Smoke Ran Like Water: Tales of Environmental Deception and the Battle Against Pollution, Basic Books, 2002, hardcover, 316 pages, ISBN 0-465-01521-2 | 3.706181 |
Math is the basis for music, but for those of us who aren't virtuosic at either, the connection isn't always easy to grasp. Which is what makes the videos of Vi Hart, a "mathemusician" with a dedicated YouTube following, so wonderful. Hart explains complex phenomena--from cardioids to Carl Gauss--using simple (and often very funny) means.
As Maria Popova pointed out yesterday, Hart’s latest video is a real doozy. In it, she uses a music box and a Möbius strip to explain space-time, showing how the two axes of musical notation (pitch and tempo) correspond to space and time. Using the tape notation as a model for space-time, she cuts and folds it to show the finite ways you can slice and dice the axes. Then, she shows us how you can loop the tape into a continuous strip of twinkling notes:
If you fold space-time into a Mobius strip, you get your melody, and then the inversion, the melody played upside down. And then right side up again. And so on. So rather than folding and cutting up space-time, just cut and tape a little loop of space-time, to be played over, and over.
It’s a pretty magical observation, and it makes even me--the prototypical math dunce--wish I’d tried harder. Yet there’s still time: Hart works for the Khan Academy, a nonprofit that offers free educational videos about math, biology, and more. Check it out.
[H/t Brain Pickings] | 3.272755 |
The colors are the different echo intensities (reflectivity) measured in dBZ (decibels of Z) during each elevation scan. "Reflectivity" is the amount of transmitted power returned to the radar receiver. Reflectivity (designated by the letter Z) covers a wide range of signals, from very weak to very strong. So, for a more convenient number for calculations and comparison, a decibel (or logarithmic) scale (dBZ) is used.
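Because dBZ is just a logarithmic rescaling of Z, the conversion is a one-line formula. A minimal sketch, assuming Z is expressed in the conventional units of mm^6/m^3:

```python
import math

def z_to_dbz(z):
    """Convert linear reflectivity Z (mm^6/m^3) to decibels of Z."""
    return 10.0 * math.log10(z)

def dbz_to_z(dbz):
    """Convert decibels of Z back to linear reflectivity."""
    return 10.0 ** (dbz / 10.0)

# A Z of 100 mm^6/m^3 corresponds to 20 dBZ (roughly light rain).
print(z_to_dbz(100))   # -> 20.0
print(dbz_to_z(20.0))  # -> 100.0
```

The logarithmic scale compresses the enormous dynamic range of Z (a factor of a billion or more between drizzle and large hail) into the convenient -30 to +75 range shown on the color scales.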
The dBZ values increase as the strength of the signal returned to the radar increases. Each reflectivity image you see includes one of two color scales. One scale (far left) represents dBZ values when the radar is in clear air mode (dBZ values from -28 to +28). The other scale (near left) represents dBZ values when the radar is in precipitation mode (dBZ values from 5 to 75). Notice the color on each scale remains the same in both operational modes; only the values change. The value of the dBZ depends upon the mode the radar is in at the time the image was created.
The scale of dBZ values is also related to the intensity of rainfall. Typically, light rain is occurring when the dBZ value reaches 20. The higher the dBZ, the stronger the rainrate. Depending on the type of weather occurring and the area of the U.S., forecasters use a set of rainrates which are associated with the dBZ values.
These values are estimates of the rainfall per hour, updated each volume scan, with rainfall accumulated over time. Hail is a good reflector of energy and will return very high dBZ values. Since hail can cause the rainfall estimates to be higher than what is actually occurring, steps are taken to prevent these high dBZ values from being converted to rainfall.
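The dBZ-to-rainrate conversion described above is typically done with a Z-R power law. The classic Marshall-Palmer relation, Z = 200 R^1.6, is a commonly used default, though forecast offices tune the coefficients for local conditions; a sketch under that assumption:

```python
import math

def dbz_to_rainrate(dbz, a=200.0, b=1.6):
    """Estimate rain rate R in mm/h from dBZ via the power law Z = a * R**b.

    a=200, b=1.6 is the Marshall-Palmer relation; operational values vary.
    """
    z = 10.0 ** (dbz / 10.0)        # back to linear reflectivity
    return (z / a) ** (1.0 / b)

# Around 20 dBZ ("light rain" in the text) this gives well under 1 mm/h:
print(round(dbz_to_rainrate(20.0), 2))   # -> 0.65
# Higher dBZ values imply much stronger rainrates:
print(round(dbz_to_rainrate(45.0), 1))
```

Because the relation is a power law, each additional 10 dBZ multiplies the estimated rainrate by roughly a factor of four, which is why capping hail-contaminated dBZ values matters so much for accumulation totals.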
#104604. Asked by madkeen4. (Apr 11 09 6:01 AM)
It is based on the tradition of giving gifts (a "Christmas box") to the less fortunate members of society.
The name derives from the fact that the day is traditionally marked by the giving of Christmas boxes, or gifts, to service workers (such as service staff, postal workers and trades people) in the United Kingdom.
"Ask FunTrivia" is for entertainment purposes only, and answers offered are unverified and unchecked by
FunTrivia. We cannot guarantee the accuracy or veracity of ANY statement posted. Feel free to post an updated
if you feel that an answer is inadequate or incorrect. Please
thoroughly research items where accuracy is important to you using multiple reliable sources. By accessing our
website, you agree to be bound by our terms of service. | 3.230406 |
Garden Talk: August 25, 2011
From NGA Editors
Native Bee Basics
Native bees are important and often under-appreciated pollinators. If you'd like to find out more about these helpful insects and what you can do to conserve and protect them on your property and in your community, start by reading about them in Bee Basics: An Introduction to Our Native Bees by Dr. Beatriz Moisset and Dr. Stephen Buchmann.
This forty-four page booklet, published by the USDA Forest Service and Pollinator Partnership, is available as a free download from the Pollinator Info website or can be ordered in a print version. With information on bee anatomy, nesting, and foraging needs, along with profiles of native bees and an extensive section on conservation and what you can do to help keep native bee populations healthy, the booklet provides a wealth of information written in an accessible manner. For those who want to delve deeper, there is a helpful resource section.
The Pollinator Info website that is offering the free download contains an interview with one of the co-authors of Bee Basics, along with extensive information on all kinds of pollinators.
To download Bee Basics and find out more about pollinators, go to: Pollinators Info.
Lingering Effects of Invasive Species
The ecological disruption caused by invasive plants species is a worldwide problem. The cost of the environmental and economic impact of these invaders is estimated to be in the neighborhood of $1.4 trillion annually! Much research is being done to come up with strategies to control the spread of undesirable plants and minimize their impact on natural ecosystems. Now new research suggests that simply removing invasive species may not return plant communities to their pre-invasion condition.
Part of developing control strategies for invasive plants involves understanding the characteristics that allow certain species to become invasive in the first place, factors such as freedom from natural enemies, disturbance in the environment, and the ability of plants to release substances that prevent competing plants from growing.
To study how the interactions between all of these factors affect the success of an invasive species, investigators from the University of California and the University of Wisconsin studied invasive velvetgrass, Holcus lanatus (illustrated), and its effect on a native daisy, Erigeron glaucus, in California. As described in an article in Science Daily on August 10, 2011, they found that direct competition between velvetgrass and the daisy accounted for much of its initial success due to the dense growth of the grass and its abundant propagules.
But they also found that velvetgrass altered the structure of the native community of soil organisms, specifically the mycorrhizal fungi in the soil. This change reduced the benefits of the mycorrhizae to the native daisy without having any negative impact on the velvetgrass. And the changes in the soil community persisted even after the velvetgrass was removed, potentially affecting the reestablishment of the native plants.
These findings suggest that studying the negative effects invasive species have on the ecology of the soil has important implications for researchers who are looking at ways to mitigate their effects.
To read more about the effects of invasive plants even after removal, go to: Science Daily.
Move Gypsy Moth Free
The gypsy moth is an introduced insect that is one of the most destructive pests of trees and shrubs ever to reach our shores. Its immature stage, a dark, hairy caterpillar with rows of red and blue spots on its back, is a general feeder that devours more than 450 species of plants! The caterpillars feast on leaves, leaving defoliated plants weakened and perhaps even killed. This pest overwinters as inch-and-a-half long egg masses that look like clumps of tan or buff-colored hairs on tree trunks, outdoor furniture, or the sides of buildings.
Native to Europe and Asia, the gypsy moth was accidentally introduced in the Boston area in the 1860's and has since spread to much of the eastern United States. There have also been some infestations on the West Coast that came from Asia. In an effort to keep this pest from spreading further, the USDA requires homeowners to inspect and remove gypsy moth egg masses from household goods prior to moving from an infested to a non-infested area.
If you have a move planned, first find out if you are in a gypsy moth-quarantined area by checking out the Your Move Gypsy Moth Free website. There you can also learn how to inspect your outdoor household articles such as lawn furniture, yard equipment, outdoor toys, and the like, for gypsy moth egg masses and remove them. Without checking, you can unwittingly bring the moth with you and risk harm to the landscape trees and shrubs and natural forests in your new community.
Print out a handy self-inspection checklist or download a brochure with all the information you need to move safely and comply with federal law. To hone your detection skills, you can even play the fun, on-line Bust-a-Moth game.
For more information, go to: Your Move Gypsy Moth Free.
Landscape Problem Solver
We all probably wish we had an experienced gardener we could call on for advice whenever problems arise in the garden. For those of us without such a fount of knowledge, the Landscape Problem Solver from the University of Maryland's Home Garden Information Center may be the next best thing.
This site offers photographic keys to help diagnose and solve plant problems, using integrated pest management principles. Choose from a list of broad categories, such as shade trees, vegetables, or houseplants. Then select the affected plant part from the drop-down menu. This brings up a photographic selection of symptoms. Choose the one that seems to fit and you get a page of information on the problem, its cause, and environmentally responsible ways to treat it. There is also information on how to look at a plant to best assess its symptoms, beneficial insects, and emerging pest threats.
The information has been put together with the Mid-Atlantic region as its focus, but there is lots of good information that will be of use to gardeners in other parts of the country.
To check out this great resource, go to Plant Diagnostics. | 3.62749 |
Tombs of the sacrificers?
A few tombs share the characteristic of containing a few very specific objects: full warrior equipment including a sword in its sheath, spear, shield, and also a knife or cleaver (butcher's blade) and an axe with an eye handle, together with one or more small buckets, a bronze pan, a toolbox, and fragments of amphorae. One or other of these objects may be missing, depending on the chronology, but the assemblage is still remarkable. The trilogy of weapon(s), small bucket(s), and axe/cleaver is always found, and the general destruction of the weapons is reminiscent of the Gauls' sacred sites.
One of the axes fits perfectly into the wound in the skull of the murdered man discovered at the settlement, and this similarity, together with the specific sets of metal objects, could indicate the tombs of sacrificers. The replacement of the axe with the butcher's cleaver, at a time when hundreds of ewes were ritually slaughtered at the settlement, is another argument for viewing these remains as those of men of religion.
14 October 2005
GSA Release No. 05-37
FOR IMMEDIATE RELEASE
Mars' Climate in Flux: Mid-Latitude Glaciers
New high-resolution images of mid-latitude Mars are revealing glacier-formed landscapes far from the Martian poles, says a leading Mars researcher.
Conspicuous trains of debris in valleys, arcs of debris on steep slopes and other features far from the polar ice caps bear striking similarities to glacial landscapes of Earth, says Brown University's James Head III. When combined with the latest climate models and orbital calculation for Mars, the geological features make a compelling case for Mars having ongoing climate shifts that allow ice to leave the poles and accumulate at lower latitudes.
"The exciting thing is a real convergence of these things," said Head, who will present the latest Mars climate discoveries on Sunday, 16 October, at the Annual Meeting of the Geological Society of America in Salt Lake City (specific time and location provided below).
"For decades people have been saying that deposits at mid and equatorial latitudes look like they are ice-created," said Head. But without better images, elevation data and some way of explaining it, ice outside of Mars' polar regions was a hard sell.
Now high-resolution images from the Mars Odyssey spacecraft's Thermal Emission Imaging System combined with images from the Mars Global Surveyor spacecraft's Mars Orbiter Camera and Mars Orbiter Laser Altimeter can be compared directly with glacier features in mountain and polar regions of Earth. The likenesses are hard to ignore.
For instance, consider what Head calls "lineated valley fill." These are lines of debris on valley floors that run downhill and parallel to the valley walls, as if they mark some sort of past flow. The same sorts of lines of debris are seen in aerial images of Earth glaciers. The difference is that on Mars the water ice sublimes away (goes directly from solid ice to gas, without any liquid phase between) and leaves the debris lines intact. On Earth the lines of debris are usually washed away as a glacier melts.
The lines of debris on Mars continue down valleys and converge with other lines of debris - again, just like what's seen on Earth where glaciers converge.
"There's so much topography and the debris is so thick (on Mars) that it's possible some of the ice might still be there," said Head. The evidence for present day ice includes unusually degraded recent impact craters in these areas - just what you'd expect to see if a lot of the material ejected from the impact was ice that quickly sublimed away.
Another peculiarly glacier-like feature seen in Martian mid-latitudes are concentric arcs of debris breaking away from steep mountain alcoves - just as they do at the heads of glaciers on Earth.
As for how ice could reach Mars' lower latitudes, orbital calculations indicate that Mars may slowly wobble on its spin axis far more than Earth does (the Moon minimizes Earth's wobble). This means that as Mars' axis tilted to the extremes - up to 60 degrees from the plane of Mars' orbit - the Martian poles get a whole lot more sunshine in the summertime than they do now. That extra sun would likely sublime water from the polar ice caps, explains Head.
"When you do that you are mobilizing a lot of ice and redistributing it to the equator," Head said. "The climate models are saying it's possible."
It's pure chance that we happen to be exploring Mars when its axis is at a lesser, more Earth-like tilt. This has led to the false impression of Mars being a place that's geologically and climatically dead. In fact, says Head, Mars is turning out to be a place that is constantly changing.
WHEN AND WHERE
Lineated Valley Fill at the Dichotomy Boundary on Mars: Evidence for Regional Mid-Latitude Glaciation
Sunday, 16 October, 3:15 p.m. MDT, Salt Palace Convention Center Room 257
View abstract: http://gsa.confex.com/gsa/2005AM/finalprogram/abstract_94125.htm
During the Geological Society of America Annual Meeting, 16-19 October, contact Ann Cairns at the GSA Newsroom, Salt Palace Convention Center, for assistance and to arrange for interviews: +1-801-534-4770.
- After the meeting contact:
- James Head III
- Department of Geological Sciences
- Brown University, Providence, RI
- Phone: +1-401-863-2526
- E-mail: [email protected] | 3.369046 |
The clock Command
The clock command has facilities for getting the current time, formatting time values, and scanning printed time strings to get an integer time value. The clock command was added in Tcl 7.5. Table 13-1 summarizes the clock command:
Table 13-1. The clock command.
clock clicks
    A system-dependent high resolution counter.
clock format value ?-format str?
    Formats a clock value according to str.
clock scan string ?-base clock? ?-gmt boolean?
    Parses date string and returns a seconds value. The clock value determines the date.
clock seconds
    Returns the current time in seconds.
The following command prints the current time:
clock format [clock seconds]
=> Sun Nov 24 14:57:04 1996
The clock seconds command returns the current time, in seconds since a starting epoch. The clock format command formats an integer value into a date string. It takes an optional argument that controls the format. The format string contains % keywords that are replaced with the year, month, day, date, hours, minutes, and seconds, in various formats. The default string is:
%a %b %d %H:%M:%S %Z %Y
Tables 13-2 and 13-3 summarize the clock formatting strings:
Table 13-2. Clock formatting keywords.
%%    Inserts a %.
%a    Abbreviated weekday name (Mon, Tue, etc.).
%A    Full weekday name (Monday, Tuesday, etc.).
%b    Abbreviated month name (Jan, Feb, etc.).
%B    Full month name.
%c    Locale specific date and time (e.g., Nov 24 16:00:59 1996).
%d    Day of month (01-31).
%H    Hour in 24-hour format (00-23).
%I    Hour in 12-hour format (01-12).
%j    Day of year (001-366).
%m    Month number (01-12).
%M    Minute (00-59).
%p    AM/PM indicator.
%S    Seconds (00-59).
%U    Week of year (00-52) when Sunday starts the week.
%w    Weekday number (Sunday = 0).
%W    Week of year (01-52) when Monday starts the week.
%x    Locale specific date format (e.g., Feb 19 1997).
%X    Locale specific time format (e.g., 20:10:13).
%y    Year without century (00-99).
%Y    Year with century (e.g., 1997).
%Z    Time zone name.
Table 13-3. UNIX-specific clock formatting keywords.
%D    Date as %m/%d/%y (e.g., 02/19/97).
%e    Day of month (1-31), no leading zeros.
%h    Abbreviated month name.
%n    Inserts a newline.
%r    Time as %I:%M:%S %p (e.g., 02:39:29 PM).
%R    Time as %H:%M (e.g., 14:39).
%t    Inserts a tab.
%T    Time as %H:%M:%S (e.g., 14:34:29).
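The keywords from the tables above can be mixed with literal text in a custom format string. For example (the output shown assumes the same date as the earlier examples; yours will differ):

```tcl
clock format [clock seconds] -format "%B %d, %Y at %H:%M"
=> November 24, 1996 at 14:57
```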
The clock clicks command returns the value of the system's highest resolution clock. The units of the clicks are not defined. The main use of this command is to measure the relative time of different performance tuning trials. The following command counts the clicks per second over 10 seconds, which will vary from system to system:
Example 13-1 Calculating clicks per second.
set t1 [clock clicks]
after 10000 ;# See page 218
set t2 [clock clicks]
puts "[expr ($t2 - $t1)/10] Clicks/second"
=> 1001313 Clicks/second
The clock scan command parses a date string and returns a seconds value. The command handles a variety of date formats. If you leave off the year, the current year is assumed.
Year 2000 Compliance
Tcl implements the standard interpretation of two-digit year values, which is that 70-99 are 1970-1999, and 00-69 are 2000-2069. Versions of Tcl before 8.0 did not properly deal with two-digit years in all cases. Note, however, that Tcl is limited by your system's time epoch and the number of bits in an integer. On Windows, Macintosh, and most UNIX systems, the clock epoch is January 1, 1970. A 32-bit integer can count enough seconds to reach forward into the year 2037, and backward to the year 1903. If you try to clock scan a date outside that range, Tcl will raise an error because the seconds counter will overflow or underflow. In this case, Tcl is just reflecting limitations of the underlying system.
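You can check the two-digit year windowing directly (assuming Tcl 8.0 or later, which parses month/day/year dates in clock scan):

```tcl
clock format [clock scan "1/1/99"] -format "%Y"
=> 1999
clock format [clock scan "1/1/01"] -format "%Y"
=> 2001
```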
If you leave out a date, clock scan assumes the current date. You can also use the -base option to specify a date. The following example uses the current time as the base, which is redundant:
clock scan "10:30:44 PM" -base [clock seconds]
The date parser allows these modifiers: year, month, fortnight (two weeks), week, day, hour, minute, second. You can put a positive or negative number in front of a modifier as a multiplier. For example:
clock format [clock scan "10:30:44 PM 1 week"]
=> Sun Dec 01 22:30:44 1996
clock format [clock scan "10:30:44 PM -1 week"]
=> Sun Nov 17 22:30:44 1996
You can also use tomorrow, yesterday, today, now, last, this, next, and ago, as modifiers.
clock format [clock scan "3 years ago"]
=> Wed Nov 24 17:06:46 1993
Both clock format and clock scan take a -gmt option that uses Greenwich Mean Time. Otherwise, the local time zone is used.
clock format [clock seconds] -gmt true
=> Sun Nov 24 09:25:29 1996
clock format [clock seconds] -gmt false
=> Sun Nov 24 17:25:34 1996 | 3.75901 |
A recursive function typically contains a conditional expression which has three parts:
Recursive functions can be much simpler than any other kind of function. Indeed, when people first start to use them, they often look so mysteriously simple as to be incomprehensible. Like riding a bicycle, reading a recursive function definition takes a certain knack which is hard at first but then seems simple.
There are several different common recursive patterns. A very simple pattern looks like this:
(defun name-of-recursive-function (argument-list)
  "documentation..."
  (if do-again-test
    body...
    (name-of-recursive-function
         next-step-expression)))
Each time a recursive function is evaluated, a new instance of it is created and told what to do. The arguments tell the instance what to do.
An argument is bound to the value of the next-step-expression. Each instance runs with a different value of the next-step-expression.
The value in the next-step-expression is used in the do-again-test.
The value returned by the next-step-expression is passed to the new instance of the function, which evaluates it (or some transmogrification of it) to determine whether to continue or stop. The next-step-expression is designed so that the do-again-test returns false when the function should no longer be repeated.
The do-again-test is sometimes called the stop condition, since it stops the repetitions when it tests false. | 3.680707 |
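To make the pattern concrete, here is a small worked example (added for illustration; it is not part of the surrounding text). It sums the numbers from 1 through NUMBER; note that here the do-again-test is written as a stop condition, so the function recurses while the test is false:

(defun triangle-recursively (number)
  "Return the sum of the numbers 1 through NUMBER inclusive.
Uses recursion."
  (if (= number 1)                    ; do-again-test (stop condition)
      1
    (+ number                         ; body
       (triangle-recursively
        (1- number)))))               ; next-step-expression

(triangle-recursively 7)
     => 28

Each instance receives a value one smaller than the last, so the stop condition (= number 1) is eventually true and the repetitions end.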
Scientists at the UCL Institute of Child Health (ICH) have developed a new gene therapy that has the potential to save the lives of children with a life-threatening tumour called neuroblastoma. The technique, which uses novel tumour-homing nanoparticles, proved effective in a first-stage trial in which researchers successfully targeted the tumours in a mouse model.
The details of the study are published online today in the international journal Biomaterials.
Stephen Hart, reader in molecular genetics at the ICH, explains: “It has long been a major technical challenge for medical researchers to use gene therapy to target this type of tumour, particularly when the cancer has spread. Now with the development of these novel nanoparticles in our laboratory, we’ve been able to deliver the genes to where they are needed, via an intravenous injection.”
Neuroblastoma is one of the most aggressive malignancies, affecting around 100 children each year in Britain. New treatments are urgently needed to tackle the disease, which is often fatal. Two thirds of children have widespread disease at diagnosis, making treatment even more challenging for specialist clinicians.
“In the mouse tumour model we have demonstrated that the nanoparticles can home in on tumours after injection into the blood stream, avoiding the liver, lung and spleen, organs that might otherwise remove the particles from the circulation. We have then used the nanoparticles to deliver a cargo of anti-tumour genes, which in turn stimulated the mouse’s immune cells to attack and destroy the tumour. We observed that tumour growth was slowed significantly, and in a third of the mice the tumours were eradicated completely and the animals survived long-term.”
“These nanoparticles are composed of peptides (small pieces of protein) and liposomes (fatty globules), as well as the therapeutic genes. Although similar to artificial viruses, the nanoparticles are safe and non-infectious.”
Dr Penelope Brock, consultant oncologist at Great Ormond Street Hospital said: “This is an extremely exciting breakthrough with enormous promise for improving clinical care of children and adolescents suffering from a very aggressive disease. I look forward to seeing results of early phase clinical trials.”
Dr Hart continues, “We now need to study the efficacy, safety and side-effects of the nanoparticles and hope that in the future our findings will translate into a viable treatment for some of the most challenging cases.”
For further information please contact Hayley Dodman, Great Ormond Street Hospital press office on 020 7239 3126 or email [email protected]
For genuine and urgent out-of-hours calls, speak to the switchboard on 020 7405 9200
Roman Catholic priests from the Order of the Sacred Hearts and led by Father Alexis Bachelot, first arrived from Europe in July of 1827. Three priests and three lay brothers celebrated the first mass of record on Hawaiian soil on July 14, 1827.
Under pressure by American Protestant missionaries, who considered Catholic doctrine a damning religious error, Kamehameha III twice expelled the Catholics. When priests reappeared in 1837 and again faced expulsion, the Sandwich Island Gazette newspaper came to the defense of religious freedom. The French in 1839 also brought pressure upon the king, and in that year Kamehameha III proclaimed a Declaration of Rights and Edict of Toleration that granted religious toleration throughout the Islands.
This was a period of fierce verbal attacks between Catholics and Protestants. The Catholic Mission wanted to have its own press. In 1841, it bought the Gazette’s old equipment and set up a print shop on the site of the present Our Lady of Peace Cathedral, but Father L. D. Maigret complained to his European superiors: “The Protestants have excellent presses of the new kind, while we have only a bad one, the characters of which do not work.” Maigret received a new press from Europe, and in 1852, the first Catholic newspaper appeared, He Mau Hana I Hanaia, Works Done, to begin a tradition of Catholic publication that continues to the present.
By Helen G. Chapin | 3.11311 |
THURSDAY, July 19 (HealthDay News) -- You may think of your birthday as only being important to your age and the possible presence of candles, cards and cake, but a new study suggests a link between your month of birth and longevity.
Researchers found that those who were born between September and November from the years 1880 to 1895 were more likely to reach the 100-year mark than their siblings who were born in March. The study does not prove a cause-and-effect link, just an association.
The meaning of the findings is unclear, and a researcher who studies lifespan called them mostly irrelevant to modern times.
But, Leonid Gavrilov, from the Center on Aging at the University of Chicago, who wrote the study with his wife, Natalia Gavrilova, said the findings point to the importance of the environment in which a child is conceived and later grows.
"We believe that avoiding any potential sources of damage to developing fetus and child may have significant effects on health in later life and longevity," Gavrilov said. "Childhood living conditions may have long-lasting consequences for health in later life and longevity."
The researchers looked at 1,574 centenarians -- people who reached the age of 100 -- in the United States. They found that those people born between September and November had about a 40 percent higher chance of living to 100 than those born in March.
Of course, the chances that people born in 1889-1895 would even reach the century mark were very low to begin with. Of those born in 1900 who were still alive at 50, just a third of 1 percent of men made it to 100, and just shy of 2 percent of women accomplished the feat, Gavrilov said.
Why might month of birth -- or month of conception -- affect how long someone lives? One possibility is that seasonal diseases played a role, Gavrilov said.
S. Jay Olshansky, a professor of public health at the University of Illinois at Chicago who's familiar with the findings, said the study is not newsworthy. "The results are probably valid, but largely irrelevant in our modern world since they apply to birth months from more than a century ago."
Regardless of the month someone was born or conceived, the odds are slim that you'll live to be 100. "This prospect has been rising through the 20th century, but not dramatically," Olshansky said.
At best, he said, "this research might offer a partial and extremely small explanation for a small fraction of why some people conceived and born more than a century ago lived for 100 years."
What does all this mean for your chances of living to 100 if you were born around the fall or -- perhaps less luckily -- in March? Good question -- and one that won't be answered until people around your age start hitting the century mark.
The study appeared in the Journal of Aging Research.
For more about healthy aging, try the U.S. National Library of Medicine.
SOURCES: Leonid Gavrilov, Ph.D., research associate, Center on Aging, University of Chicago; S. Jay Olshansky, Ph.D., professor, public health, University of Illinois at Chicago; 2011 Journal of Aging Research
Copyright © 2013 HealthDay. All rights reserved. HealthDayNews articles are derived from various sources and do not reflect federal policy. healthfinder.gov does not endorse opinions, products, or services that may appear in news stories. For more information on health topics in the news, visit Health News on healthfinder.gov.
This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100).
The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined.
Examples of magnitude values for well-known objects are:

    Sun                                -26.7 (about 400 000 times brighter than the full Moon!)
    Brightest Iridium flares           -8
    Venus (at brightest)               -4.4
    International Space Station        -2
    Sirius (brightest star)            -1.44
    Limit of human eye                 +6 to +7
    Limit of 10x50 binoculars          +9
    Limit of Hubble Space Telescope    +30
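Because the scale is logarithmic, the brightness ratio between two objects follows directly from their magnitude difference: ratio = 100^(difference/5). A small Python sketch of this rule (the full-Moon magnitude of about -12.7 used below is an assumed figure, not stated above):

```python
def brightness_ratio(m_bright, m_faint):
    """How many times brighter an object of magnitude m_bright is than
    one of magnitude m_faint (lower magnitude = brighter)."""
    return 100 ** ((m_faint - m_bright) / 5)

# A 5-magnitude gap is exactly a factor of 100:
print(brightness_ratio(0, 5))                   # 100.0

# Sun (-26.7) vs an assumed full-Moon magnitude of about -12.7:
print(round(brightness_ratio(-26.7, -12.7)))    # ~400 000, matching the note above
```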
Investigate the specks, flecks, and particles in the air—with airborne junk detectors you can easily make. Directions are in this activity from HHMI’s Cool Science for Curious Kids.
Involve children in collecting leaves, rocks, and other natural items, and use the collections to teach children math and science skills. This resource from Oregon State University tells you what to do.
Use the insect fact sheet in this middle school curriculum from Clemson University to help children identify and classify insects they might find outdoors.
Taking a “virtual” field trip is almost as good as being outdoors. Explore a cove forest and a salt marsh with this program from Clemson University.
What Is Cool Science?
At Cool Science, we entertain questions of all kinds (Ask a Scientist). We encourage young scientists to get their hands dirty-virtually (Curious Kids). We offer high school and college students new approaches to cutting-edge science topics (BioInteractive). We provide educators with a host of innovative resources they can use in their classrooms (For Educators). We reveal what it takes to become a scientist (Becoming a Scientist). And we showcase an undergraduate science discovery project that may one day change the way science is taught (SEA).
We invite you to explore the many cool features of Cool Science.
Help children study plants and animals in local outdoor settings by adapting some of the activities from this curriculum developed by Oregon State University.
Use these Yale University activities—which require simple, inexpensive, and easily obtainable materials—to help children learn about volcanoes, magnetism, and other topics.
When it’s too hot to be outdoors, educators, older students, and parents can try their hand at this visual and motor test that involves learning a new motor skill. This activity is from HHMI’s Biointeractive. | 3.552491 |
The Battle of Noyon-Montdidier, 9-13 June 1918, was the fourth of General Erich von Ludendorff’s great offensives of the spring and summer of 1918 that came close to breaking the Allied lines on the western front, but instead critically damaged the fighting capacity of the German army.
The first and third of those offensives (Second Battle of the Somme and Third Battle of the Aisne) had created two giant salients in the Allied lines. The Noyon-Montdidier offensive was designed to link these two salients. This would straighten out the line and potentially threaten Paris. Two German armies – the Eighteenth under General Oskar von Hutier and the Seventh under General Max von Boehn – were allocated to the attack. They were opposed by two French armies – the Third under General Georges Humbert and the Tenth under General Charles Mangin. The French also had access to American troops, who would play a part in defeating the offensive.
The French had sufficient warning of the German attack. On 9 June the German Eighteenth Army attacked the French Third Army from the north. Its attack was disrupted by a French counter-bombardment, but was still able to make some progress, although not on the same scale as in the earlier offensives.
The German Seventh Army joined the offensive on 10 June, attacking the French Tenth Army from the east. This attack failed to make any significant progress. The two armies were meant to meet at Compiègne, but only Hutier made any progress towards the rendezvous.
On 11 June the French and Americans launched a counterattack which pushed the Germans back from their most advanced positions. On 13 June the battle came to an end. It was a clear German failure, and a sign that the German army was wearing down. It would launch one more offensive, on the Marne in mid-July, but that would soon be followed by the great Allied counterattacks that would push the German armies back towards the French border.
The tomato sector in Ghana has failed to reach its potential, in terms of attaining yields comparable to other countries, in terms of the ability to sustain processing plants, and in terms of improving the livelihoods of those households involved in tomato production and the tomato commodity chain. Despite government interventions that include the establishment of a number of tomato processing factories, tomatoes of the right quality and quantity for commercial agroprocessing are not being grown. Many farmers still prefer to plant local varieties, typically with a high water content, many seeds, poor color, and low Brix (sugar content). Land husbandry practices are often suboptimal. Average yields remain low, typically under ten tons per hectare. Because of production seasonality, high perishability, poor market access, and competition from imports, some farmers are unable to sell their tomatoes, which are left to rot in their fields. Yet other farmers in Ghana have achieved higher tomato yields, production is profitable, and many farmers in Ghana continue to choose to grow tomatoes over other crops.
International Food Policy Research Institute (IFPRI) | 3.201036 |
One of the basic tenets of teaching is that students must learn the basics and foundations of a subject in order to eventually master it and reach their full potential.
New research from the University of Missouri supports this notion, revealing that kids who understood numbers and quantity in the first grade were more likely to get good grades in math when they hit fifth grade.
“This study reinforces the idea that math knowledge is incremental, and without a good foundation, a student won’t do well because the math gets more complex,” said researcher David Geary. “The kids that can go back and forth easily and quickly in translating numerals, the number five, for example, into quantities and in breaking complex problems into smaller parts had a very good head start.”
The study followed 177 elementary school students, beginning in kindergarten. Researchers hope to follow the group until they reach 10th grade algebra classes in an attempt to gain a deeper understanding of how kids learn, especially when it comes to math. Additionally, the findings may help educators discover better methods of teaching.
Personal growth activities such as studying, doing homework and attending school are integral to a young person's development and can even set them on the right path toward a fulfilled life.
Philosopher, educator and trailblazer Ilchi Lee believes that human potential is limitless and that individuals can push the boundaries of their abilities with practice and hard work. Results of this study support such thoughts, providing further proof that the brain works gradually.
Students may want to consider ridding their minds of distractions and negativity before engaging in study sessions or attending class in order to reap the full benefits of education. | 3.882269 |
Leaf Characteristics (PK1) This set introduces simple vocabulary to describe the physical features of 40 North American tree, garden, and house plant leaves. First - The child sorts 9 leaf characteristics cards (3" x 4") onto 3 control cards (10-3/8” x 5¼”) that identify characteristics of Leaf Types, Leaf Veins, and Leaf Margins. Second - After learning the 9 characteristics of leaves, it is time to describe the 3 characteristics of just one leaf. A leaf card is selected from the 40 leaf cards provided (3" x 4"). The child selects the 3 characteristics cards (type, venation, margin) that describe that leaf, and places them on the blank Leaf Identification card (10-3/8” x 5¼”). Real leaves can be used in this exercise as well. Background information is included for the teacher.
Leaves (PK1C) This set consists of 40 DUPLICATE leaf cards (80 cards total). One group of 20 cards illustrates familiar leaves such as dandelion, marigold, and ivy. The second group illustrates common North American tree leaves such as oak, maple, and cottonwood. These are the same leaf cards found in In-Print for Children's “Leaf Characteristics” activity.
Flowers (FL1) This set is designed to help children recognize and to name 20 common flowers, many of which are commercially available throughout the year. This duplicate set of picture cards can be used in simple matching exercises, or in 3-part matching activities if one set is cut apart. The 40 photocards (3¼” x 4") are in full-color and laminated. Flower background information is included for the teacher.
Nuts (PK3) Nuts are nourishing snacks and learning how they grow will make eating them all the more fun! This set of 22 two-color cards (5½” x 3½”) of plant and nut illustrations represents eleven edible nuts/seeds. The child pairs the illustration cards of the nuts in their growing stage to the cards of the nuts in and out of their shells. Make the activity even more successful by bringing the real nuts into the classroom.
Kitchen Herbs & Spices (PK5) This set help children to learn about 20 plants that give us herbs and spices. The delicately drawn, 2-color illustrations clearly show the parts of the plants that give us edible leaves, seeds, stems, bark, bulbs, and berries. Create an aromatic and tasty exercise by having the children pair real herbs and spices with these cards (4½” x 6¼”).
Plants We Eat (PK9) Learn more about food plants and their different edible parts. This set classifies 18 plant foods into six groups: roots, stems, leaves, flowers, fruits, and seeds. A duplicate set of 18 labeled picture/definition cards (6" x 6") shows plants in their growing stage with only their “food” portion in color. One set of picture/ definition cards is spiral bound into 6 control booklets that include definitions of the root, stem, leaf, flower, fruit, and seed. The other set of picture/ definition cards are to be cut apart for 2 or 3-part matching exercises. Plant description cards can be used for “Who am I?” games with our plant picture cards or with real foods. Both cards and booklets are laminated.
Plants We Eat Replicards (PK9w) Six replicards are photocopied to produce worksheets for an extension exercise using our set Plants We Eat (PK9). Children color and label the worksheets, which illustrate three plant examples for each of the following groups: roots, stems, leaves, flowers, fruits, and seeds. The Plants We Eat booklets serve as controls. After worksheets (8½” x 11") are colored and labeled, they can be cut apart, stapled together, and made into six take-home booklets. These booklets may generate lively family dinner-table discussions: “A potato is a what?”
Plants - Who am I? (WP) This beginning activity for lower elementary strengthens both reading and listening skills, and provides children with simple facts about 10 plants. The set consists of duplicate, labeled picture cards with descriptive text and features plants different from those in the First Knowledge: Plant Stories (see below). The set of cards with text ending in “Who am I?” is cut apart into 10 picture cards, 10 plant name cards, and 10 text cards. The other set is left whole. Cards are used for picture-to-text card matching exercises and for playing the “Who am I” game. Cards measure 6½” x 4" and are in full color and laminated.
First Knowledge: Plant Stories (PK7) This set consists of 19 duplicate plant picture/text cards. One set is cut apart for 3-part matching activities, and the other set is placed in the green, 6-ring mini-binder labeled Plants. The teacher has the option of changing the cards in the binder as needed. The children can match the 3-part cards (6" x 3¾”) to the cards in the binder, practice reading, learn about the diverse characteristics of these plants, and then play “Who am I?” The eight angiosperms picture cards can be sorted beneath two cards that name and define Monocots and Dicots. These activities prepare children for later work with our Plant Kingdom Chart & Cards (see below), which illustrates the same plants.
Plant Kingdom Chart and Cards (PK6) Our 4-color plastic paper chart and cards represents the current classification of the plant kingdom (not illustrated here) – the same as is used in secondary and college level biology courses. This classification organizes the plant kingdom in a straightforward manner with simple definitions and examples under each heading. First, the plants are categorized as either Nonvascular Plants (Bryophytes) or Vascular Plants. Then the Vascular plants are divided into two groups: Seedless Plants or Seed Plants. Seed Plants are divided into two groups: Gymnosperms and Angiosperms with sub-categories. Nineteen picture cards (2¼” x 3") illustrate the currently recognized phyla of the plant kingdom. Children match the 19 plant picture cards to the pictures on the chart (18" x 32"). Text on the back of the picture cards describes each plant. Advanced students can recreate the chart with the title cards provided, using the chart as a control of error. Background information is provided.
Parts of a Mushroom (FK1) Parts of a gilled mushroom are highlighted and labeled on six 2-color cards (3" x 5"). Photocopy the Replicard (8½” x 11") to make quarter page worksheets. The child colors and labels the worksheets, using the picture cards as a guide. Completed worksheets can be stapled together to make a booklet for “Parts of a Mushroom”.
Fungi (FK4) Members of the Fungus Kingdom have a wide variety of forms. Children see fungi everywhere, such as mold on food, or mushrooms on the lawn. This duplicate set of labeled picture cards shows 12 common fungi found indoors and out. Fungi illustrated: blue cheese fungus, bolete, coral fungus, cup fungus, jelly fungus, lichens, mildew, milky mushrooms, mold, and morel. Background information is included. Pictures cards (3½” x 4½”) are in full color and laminated.
Classification of the Fungus Kingdom Chart and Cards (FK3) This classification of the Fungus Kingdom organizes 18 representative fungi into four major groups and two important fungal partnerships: Chytrids, Yoke Fungi, Sac Fungi, Club Fungi, Lichens, Mycorrhizae. Children match the 18 picture cards (2-7/8” x 2-3/8”) to the pictures on the 2-color chart (18" x 16"). After this activity, they can sort the picture cards under the label cards for the fungus groups, using the chart as the control. A description of each fungus type is printed on the back of the picture cards. Background information is included for the teacher. This chart is printed on vinyl and does not need to be laminated.
Education for Sustainable Diversity (ESD), as defined by the Office for Inclusion, is the practice of acquiring knowledge about and becoming aware of ways in which our beliefs and biases impact the quality of relationships among people from different cultural groups around the world.
The goal of the Office for Inclusion is to support a campus community that understands how to fully integrate the University’s core values of diversity and inclusion into campus work and learning environments. ESD serves to provide opportunities for people to better understand the complexities of, and synergies between, the issues threatening our intercultural sustainability, and to assess their own values and those of the society in which they live. ESD seeks to engage people in negotiating a sustainable future, making decisions and acting on them. To do this effectively, the following skills are essential to ESD:
- Envisioning -- being able to imagine a better future. The premise is if we recognize and begin to understand the issues that obstruct intercultural respect, we will be better able to work through issues and find ways to respect each other.
- Critical thinking and reflection -- learning to question our current belief systems and to recognize the assumptions underlying our knowledge, perspective and opinions. Critical thinking skills help people learn to examine social and cultural structures in the context of sustainable intercultural development.
- Systemic thinking -- acknowledging complexities and looking for links and synergies when trying to find solutions to problems.
- Building partnerships -- promoting dialogue and negotiation, learning to work together.
- Participation in decision-making -- empowering people.
Services offered include:
- Interactive theatre, which uses customized and uniquely designed sketches performed by actors to address specific issues, and allows for open and guided discussion between the audience and actors to find solutions to issues.
- Customized small-group workshops, large-group presentations, and symposia featuring themes around diversity, inclusion and social justice.
- e-learning tools custom designed for large audiences on specific topics
- Topics include: MSU's anti-discrimination policies, diversity/inclusion, best practices
For more information or to request a service, please contact the main office at 517-353-3922. | 3.140936 |
Eggs are an excellent source of choline, a little-known but essential nutrient that contributes to fetal brain development and helps prevent birth defects. The National Academy of Sciences recommends increased choline intake for pregnant and breastfeeding women. Two eggs - including the yolks - contain about 250 milligrams of choline, or roughly half the recommended daily amount. The National Academy of Sciences recommends that pregnant women consume 450 milligrams of choline per day and that breastfeeding women consume 550 milligrams per day.
In addition to choline, eggs have varying amounts of three other nutrients that pregnant women need most. Eggs are a good source of the highest quality protein, which helps to support fetal growth. Eggs also have a B vitamin that is important for normal development of nerve tissue and can help reduce the risk of serious birth defects that affect the baby's brain and spinal cord development. The type of iron in eggs (a healthy mixture of heme and non-heme iron) is particularly well-absorbed, making eggs a good choice for pregnant and breastfeeding women who are at higher risk for anemia.
To learn more about choline and stay up-to-date on the latest research visit, www.cholineinfo.org. | 3.302787 |
There is perhaps no sound so recognizable as the first drone of a bagpipe. That sustained note acts as harmony to the melody, which is fingered by the piper on the chanter. These days, bagpipes are most commonly connected with the Scots, but pipes have a long history in Ireland, as well.
The Great Irish Warpipes, which are nearly identical to Scottish Highland bagpipes, appear in historical reference as early as the 1400s. As the name implies, the instrument was played by Irish soldiers as they marched into battle. In 1581, the Italian music theorist Vincenzo Galilei — Galileo’s father — described the bagpipe as “much used by the Irish: to its sound this unconquered fierce and warlike people march their armies and encourage each other to deeds of valor. With it they also accompany the dead to the grave making such sorrowful sounds as to invite, nay to compel the bystander to weep.”
While the Warpipes continued to be used in battle, by the 1700s, a new kind of pipe appeared in Ireland—the Uilleann Pipe (“pipes of the elbow”). A descendant of the Pastoral Pipes, the Uilleann Pipe is smaller; its bellows is filled with air by pumping the elbow rather than blowing with the mouth; it can produce many notes thanks to its two-octave range; it is quieter and sweeter sounding than its warrior kin; and it is played sitting down. Its dulcet tones make it a lovely ensemble instrument and so it can be heard quite a lot in traditional Irish music. Perhaps the best known Uilleann piper of today is Paddy Moloney of The Chieftains, who performed to a sold-out crowd last month in Santa Barbara.
Although the Uilleann pipe is the national pipe of Ireland, its makeup doesn’t lend itself to leading the charge — or parades. Therefore, it is the Great Irish Warpipes or generally the more common Scottish bagpipe that folks see and hear in St. Patrick’s Day parades. In fact, this Paddy’s Day, several area bagpipers will be leading the annual Santa Barbara Independent St. Paddy’s Day Stroll down State Street. | 3.713411 |
Primary Immunodeficiency (PI) affects as many as 1 million Americans and 10 million worldwide. Yet, it is just beginning to receive widespread attention.
When a defect in the immune system is carried through the genes, it is called a Primary Immunodeficiency. More than 150 Primary Immunodeficiency diseases have been identified to date. They range widely in severity. Primary Immunodeficiency diseases are characterized by infections that can often be recurring, persistent, debilitating, and chronic.
Everyone should know the following vital facts about PI:
- Primary Immunodeficiency diseases can go undetected because they do not have unique symptoms of their own. Rather, they appear as "ordinary" infections, often of the sinuses, ears, or lungs. They can also cause gastrointestinal problems or inflammation of the joints. Families and doctors are often unaware that the troubling conditions they are dealing with are actually rooted in a defect of the immune system.
- The infections can be chronic. This means they keep coming back, sometimes frequently, and can be severe. They tend to require prolonged recovery, and the patient may respond poorly to a conventional course of antibiotics.
- The diseases can strike males and females of all ages, though they frequently present themselves early in life. The more severe immunodeficiency diseases are detected most frequently in children.
- Early diagnosis and treatment of Primary Immunodeficiency disease is essential to preventing the infections from causing permanent damage.
- Research in Primary Immunodeficiency is central to progress in immunology. As medical science further illuminates the complexities of the immune system, patients are benefiting from a host of cutting-edge diagnostic tools and treatments. The problems presented by genetic immunodeficiency disease have challenged researchers and immunologists to reach improved diagnoses, treatments, and innovative new therapies. Promising results are being reported for immunodeficient patients using intravenous gamma globulin, bone marrow transplantation, enzyme replacement and genetically engineered proteins such as gamma interferon.
Research in this area of immunology is quickly yielding positive benefits for victims of cancer, AIDS, asthma, autoimmunity and a wide range of pulmonary and allergic conditions.
How is Primary Immunodeficiency diagnosed?
Correct diagnosis of a PI disease begins with awareness of the 10 Warning Signs; the first step in diagnosis is a good evaluation. An immune system specialist (immunologist) can help with diagnosis and treatment. If you need help finding an immunologist, the Find an Expert section of this website can help you locate a physician or medical center in your area.
At the time of the evaluation, your doctor will ask questions about your health. Frequent or unusual infections, prolonged diarrhea, and poor childhood growth are some symptoms of a possible Primary Immunodeficiency. Because some Primary Immunodeficiencies run in families, you may also be asked questions about your family history.
Evaluation of the immune system may include:
- Detailed medical history
- Physical exam
- Blood tests
- Vaccines to test the immune response
If a Primary Immunodeficiency is suspected, a series of blood tests and vaccines may be required. Blood tests will show if any part of the immune system is missing or not working properly. Vaccines may be given to test the immune system's response, i.e. its ability to fight invaders.
See the 4 Stages of Immunologic Testing for more on this process.
How is PI treated?
There are a variety of treatment choices for immunodeficient patients. At a minimum, the recurring infections can be treated with antibiotics. These can help prevent damage caused by chronic illness, improving a patient's chances for long-term survival while enhancing the quality of life.
Another important treatment intervention is antibody replacement therapy, often referred to as IVIG therapy. IVIG works by replacing the antibodies that the body cannot make on its own. IVIG is now an accepted treatment protocol for a range of Primary Immunodeficiency diseases. Individuals can learn more about antibody replacement therapy or IVIG, by visiting the following websites: Baxter Healthcare, CSL Behring, Grifols, and Octapharma.
In other cases, bone marrow transplants, gene therapy, or other alternative treatments may be appropriate.
What are the goals of treatment?
Doctors believe people with a Primary Immunodeficiency can lead active and full lives. A guiding objective of the Jeffrey Modell Foundation is to help people with PI regain or maintain control of their lives by:
- Participating in work, school, family, and social activities;
- Decreasing the number and severity of infections;
- Having few, if any, side effects from medications and other treatments;
- Feeling good about themselves and their treatment program.
Additional Information on PI
If you have been diagnosed with a PI disease, learn more about Living with PI.
Learn more about our PI Awareness Campaign and view or download posters, brochures, videos and other educational information on PI. Kids can visit our Kids Korner to learn more about PI.
Still have questions? Visit our FAQ section for valuable answers to your most frequently asked questions about PI, or get more information about Specific Defects. You may also want to view our Related Links to other websites involved with PI.
In addition, you or someone you know can access information and resources in English and Spanish through our toll-free hotline at 1-866-INFO-4-PI. | 3.639537 |
Republic of Panama
The southernmost of the Central American nations, Panama is south of Costa Rica and north of Colombia. The Panama Canal bisects the isthmus at its narrowest and lowest point, allowing passage from the Caribbean Sea to the Pacific Ocean. Panama is slightly smaller than South Carolina. It is marked by a chain of mountains in the west, moderate hills in the interior, and a low range on the east coast. There are extensive forests in the fertile Caribbean area.
Explored by Columbus in 1502 and by Balboa in 1513, Panama was the principal shipping point to and from South and Central America in colonial days. In 1821, when Central America revolted against Spain, Panama joined Colombia, which had already declared its independence. For the next 82 years, Panama attempted unsuccessfully to break away from Colombia. Between 1850 and 1900 Panama had 40 administrations, 50 riots, 5 attempted secessions, and 13 U.S. interventions. After a U.S. proposal for canal rights over the narrow isthmus was rejected by Colombia, Panama proclaimed its independence with U.S. backing in 1903.
For canal rights in perpetuity, the U.S. paid Panama $10 million and agreed to pay $250,000 each year, which was increased to $430,000 in 1933 and to $1,930,000 in 1955. In exchange, the U.S. got the Canal Zone—a 10-mile-wide strip across the isthmus—and considerable influence in Panama's affairs. On Sept. 7, 1977, Gen. Omar Torrijos Herrera and President Jimmy Carter signed treaties giving Panama gradual control of the canal, phasing out U.S. military bases, and guaranteeing the canal's neutrality.
Nicolas Ardito Barletta, Panama's first directly elected president in 16 years, was inaugurated on Oct. 11, 1984, for a five-year term. He was a puppet of strongman Gen. Manuel Noriega, a former CIA operative and head of the secret police. Noriega replaced Barletta with vice president Eric Arturo Delvalle a year later. In 1988, Noriega was indicted in the U.S. for drug trafficking, but when Delvalle attempted to fire him, Noriega forced the national assembly to replace Delvalle with Manuel Solis Palma. In Dec. 1989, the assembly named Noriega “maximum leader” and declared the U.S. and Panama to be in a state of war. In Dec. 1989, 24,000 U.S. troops seized control of Panama City in an attempt to capture Noriega after a U.S. soldier was killed in Panama. On Jan. 3, 1990, Noriega surrendered himself to U.S. custody and was transported to Miami, where he was later convicted of drug trafficking. Guillermo Endara, who probably would have won an election suppressed earlier by Noriega, was installed as president.
On Dec. 31, 1999, the U.S. formally handed over control of the Panama Canal to Panama. Meanwhile, Colombian rebels and paramilitary forces have made periodic incursions into Panamanian territory, raising security concerns. Panama has also faced increased drug and arms smuggling.
In May 2004 presidential elections, Martín Torrijos Herrera, the son of former dictator Omar Torrijos, won 47.5% of the vote. He took office in September.
Panamanians approved a plan to expand the Panama Canal in 2006. The expansion will likely double the canal's capacity and is expected to be completed by 2015.
Defying the current Latin American trend for left-leaning governments, Panama elected millionaire businessman Ricardo Martinelli as its president on May 3, 2009. After a period of rapid economic growth, Panama had succumbed to the global recession. Trading on his personal record of success—and utilizing his fortune to get his message out—Martinelli promised to encourage foreign investment and help the poor. | 3.547897 |
A box girder bridge is a bridge in which the beams are box-shaped girders made of concrete, steel, or a combination of both. Many modern elevated structures are built on the box girder design, and most of us travel over box girder bridges regularly; the most common examples are flyovers and the structures built for light rail transport. Because they can sustain heavy loads, box girders are used in the construction of flyovers and of pedestrian crossways. They appear on cable-stayed bridges and other forms, but generally they are a form of beam bridge. The support plates of the girder are tightly joined to form a hollow box, which can have the shape of a rectangle, tightly coupled triangles, or a trapezium. Box girders also offer stronger support for bridges constructed with an arch. The famous Bay Bridge in California, connecting San Francisco and Oakland, is a classic example of a box girder bridge.
Box girder bridges are made of prefabricated steel girders, manufactured in a factory and assembled on site. The girders can also be made of high-performance, prestressed, reinforced concrete, or a mixture of concrete and steel. Because both the girders and the bridge segments are prefabricated, they can be manufactured in one place and installed in another; that is the beauty of this construction. The bridge can be moved into position by incremental launching, with huge cranes placing each new segment onto the completed portions of the bridge until the whole structure is assembled.
Box girder bridges have a number of key advantages over I-beam girders. Box girders offer better resistance to torsion, which is especially beneficial for a curved bridge. Larger girders and stronger flanges can be used, allowing a longer span between the support posts that hold up the bridge. The large hollow boxes can also carry water lines, telephone cables, and other utilities.
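The torsion advantage can be made concrete with the standard thin-walled section formulas from engineering texts (Bredt's formula for closed sections). This is only an illustrative sketch: the dimensions below are invented for the comparison, not taken from any real bridge.

```python
# Bredt's formula for a closed thin-walled section: J = 4 * A**2 * t / s,
# where A is the area enclosed by the wall midline, t the wall thickness,
# and s the midline perimeter. The same plates left as an open section
# give only J = sum(b * t**3 / 3).

def j_closed_box(width, height, t):
    """Torsion constant (m^4) of a thin-walled rectangular box girder."""
    area = width * height               # area enclosed by the wall midline
    perimeter = 2 * (width + height)    # midline perimeter
    return 4 * area**2 * t / perimeter

def j_open_section(total_plate_length, t):
    """Torsion constant (m^4) of the same plates as an open section."""
    return total_plate_length * t**3 / 3

closed = j_closed_box(2.0, 1.0, 0.02)   # a 2 m x 1 m box with 20 mm walls
open_ = j_open_section(6.0, 0.02)       # the same 6 m of 20 mm plate, unfolded
print(closed / open_)                   # the closed box is thousands of times stiffer
```

With the same amount of steel, closing the section raises the torsion constant by a factor of several thousand, which is why curved viaducts favor box girders.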
Box girders are more expensive to fabricate, and they are difficult to maintain because maintenance work must be done in the confined space inside the box. Major disasters occurred when the West Gate Bridge in Australia and the Cleddau Bridge in the UK collapsed during construction, but those failures gave rise to new designs for box girder bridges. Bridges constructed today are also designed with earthquakes in mind, to withstand even high-magnitude events.
One of the major threats is corrosion of the steel cables inside the bridge. In 2009, an inspection of the Cline Avenue Bridge over the Indiana Harbor and Ship Canal revealed serious corrosion of the steel cables and of the steel within the box girders due to water seepage. The corrosion was so bad that the bridge could not be repaired and had to be closed permanently. Box girder bridges therefore have to be inspected and serviced regularly to prevent corrosion, and using high-grade steel or corrosion-resistant metals and alloys in their construction can help as well.
Scientists get further evidence that Mars once had oceans
Mars, our neighbor, has long occupied the dreams of science fiction writers and astronomers alike: the writers imagined the life that could have lived on Mars, and still might, while the astronomers seek to prove that there might actually have been life on the red planet eons ago.
Part of proving that idea is being able to show that there was water on the surface of Mars, water that would have been the foundation of life, just as it is here on earth.
To help determine whether there was, or even still is, water on Mars, the European Space Agency (ESA) Mars Express spacecraft, which carries the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), has detected sediment on the planet, the type of sediment you would find on the floor of an ocean.
It is within the boundaries of features tentatively identified in images from various spacecraft as shorelines that MARSIS detected sedimentary deposits reminiscent of an ocean floor.
“MARSIS penetrates deep into the ground, revealing the first 60 – 80 meters (197 – 262 ft) of the planet’s subsurface,” says Wlodek Kofman, leader of the radar team at the Institut de Planétologie et d’Astrophysique de Grenoble (IPAG). “Throughout all of this depth, we see the evidence for sedimentary material and ice.”
The sediments detected by MARSIS are areas of low radar reflectivity, which typically indicates low-density granular materials that have been eroded away by water and carried to their final resting place.
Scientists are interpreting these sedimentary deposits, which may still be ice-rich, as another indication that there was once an ocean in this spot.
At this point scientists have proposed that there were two main oceans on the planet: one around 4 billion years ago and a second around 3 billion years ago.
For scientists, the MARSIS findings provide some of the best evidence yet that Mars did have large bodies of water on its surface and that the water played a major role in the planet's geological history.
The Current Surface Analysis map shows current weather conditions, including frontal and high/low pressure positions, satellite infrared (IR) cloud cover, and areas of precipitation. A surface weather analysis is a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations. Weather maps are created by plotting or tracing the values of relevant quantities such as sea level pressure, temperature, and cloud cover onto a geographical map to help find synoptic scale features such as weather fronts.
The first weather maps in the 19th century were drawn well after the fact to help devise a theory on storm systems. After the advent of the telegraph, simultaneous surface weather observations became possible for the first time, and beginning in the late 1840s, the Smithsonian Institution became the first organization to draw real-time surface analyses. Use of surface analyses began first in the United States, spreading worldwide during the 1870s. Use of the Norwegian cyclone model for frontal analysis began in the late 1910s across Europe, with its use finally spreading to the United States during World War II.
Surface weather analyses have special symbols which show frontal systems, cloud cover, or other important information. For example, an H may represent high pressure, implying good and fair weather. An L, on the other hand, may represent low pressure, which frequently accompanies precipitation. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the map. Areas of precipitation help determine the frontal type and location.
Breakthroughs bring the next two major leaps in computing power into sight
Breakthroughs might make quantum computing, and a replacement for silicon, practical within a decade
One of the best things about covering technology is that you're always on the edge of a completely new generation of stuff that will make everything completely different than it ever was before, even before the last generation made everything different.
"Completely different" always seems pretty much the same, with a few more complications, higher costs and a couple of cool new capabilities, of course.
Unless you look back a decade or two and see that everything is completely different from the way it was then…
Must be some conceptual myopia that keeps us in happy suspense over the future, nostalgic wonder at the past and bored annoyance with the present.
The next future to get excited about is going to be really cool, though.
You know how long scientists have been working on quantum computers, machines that will be incomparably more powerful than the ones we have now because they don't have to be built on a "bit" that's either a 1 or a 0? They would use a piece of quantum data called a qubit (or qbit; consistent with everything in the quantum world, the spelling wants to be two things at once) that can exist in several states at the same time. That would turn the most basic function in computing from a toggle switch into a dial with many settings.
Multiply the number of pieces of data in the lowest-level function of the computer and you increase its power exponentially.
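As a rough illustration of that growth (my sketch, not anything from the research described here), the count of states doubles with every bit or qubit added, so the description of a register grows exponentially:

```python
# A register of n classical bits holds exactly one of its 2**n possible
# settings at a time; an n-qubit register is described by 2**n complex
# amplitudes simultaneously. Either way, the count doubles with each
# unit added, i.e. it grows exponentially.

def qubit_amplitudes(n):
    """Number of amplitudes needed to describe an n-qubit state."""
    return 2 ** n

for n in (1, 2, 10, 50):
    print(n, qubit_amplitudes(n))
```

By 50 qubits the description already runs to about 10^15 amplitudes, which is why simulating quantum hardware on classical machines gets hard so quickly.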
Making it happen has been a trick; they've been under development for 20 years and probably won't show up for another 10.
Teams of Austrian scientists may cut that time down a bit with a system they developed that they say can create digital models of quantum-computing systems, making testing and development of both theory and manufacturing issues quicker and easier.
They did it the same way Lord of the Rings brought Gollum to life: putting a living example in front of cameras and taking detailed pictures they could use to recreate the image in any other digital environment.
Rather than an actor, the photo subject was a calcium atom, drastically cooled to slow its motion, then manipulated using lasers and put through a set of paces predicted by quantum-mechanical theory while the results were recorded.
Abstracting those results lets the computer model predict the behavior of almost any other quantum particle or environment, making it possible to use the quantum version of a CAD/CAM system to develop and test new approaches to the systems that will actually become quantum computers, according to a paper published in the journal Science by researchers from the University of Innsbruck and the Institute for Quantum Optics and Quantum Information (IQOQI).
Far sooner than quantum computers will blow our digitized minds, transistors made from graphene rather than chunkier materials will allow designers to create processors far more dense – and therefore more potentially powerful – than anything theoretically possible using the silicon and metallic alloys we rely on now.
Graphene is a one-atom-thick layer of carbon that offers almost no resistance to electricity flowing through it, but doesn't naturally contain electrons at two energy levels, as silicon does. Silicon transistors flip on or off by shifting electrons from one energy level to another.
Even silicon doesn't work that way naturally. It has to be "doped" with impurities to change its properties as a semiconductor.
For graphene to work the same way, researchers have to add inverters that mimic the dual energy levels of silicon. So far they only work at 320 degrees below zero Fahrenheit (77 degrees Kelvin).
Researchers at Purdue's Birck Nanotechnology Center built a version that operates at room temperature, removing the main barrier to graphene as a practical option for computer systems design.
The researchers, led by doctoral candidate Hong-Yan Chen, presented their paper at the Device Research Conference in Santa Barbara, Calif., in June to publicize their results with the inverter.
Real application will have to wait for Chen or others to integrate the design into a working circuit based on graphene rather than silicon.
Systems built on graphene have the potential to boost the computing power of current processors by orders of magnitude while reducing their size and energy use, but only if they operate in offices not cooled to 77 degrees Kelvin.
It will still be a few years before graphene starts showing up in airline magazines, let alone in IT budgets. We'll probably be tired of them, too, by the time quantum computers show up, but there's just no satisfying some people.
Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | 3.120441 |
And Wallace was impetuous. While Darwin fully understood the implications of his theory, holding back publication because he knew he would upset believers, including his wife, Wallace plunged in, happy to upset society. He didn’t give a damn, said Jonathan Rosen, in an essay on Wallace in the New Yorker magazine last year. “This utter independence from public opinion is one of several reasons that he has all but vanished from popular consciousness.” In addition, Wallace believed in spiritualism (which Darwin and his friends detested) and later campaigned against vaccination. “Wallace was an admirable man and was almost saintly in his treatment of others,” says David Attenborough. “However, as a scientist, he was no match for Darwin. Wallace came up with the idea of natural selection in a couple of weeks in a malarial fever. Darwin not only worked out the theory, he amassed swathes of information to support it.”

This point is backed by historian Jim Endersby. “Natural selection was a brilliant idea but it was the weight of evidence, provided by Darwin, that made it credible. That is why we remember Darwin as its principal author.”

On his round-the-world voyage on the Beagle, between 1831 and 1836, he had filled countless notebooks with observations, particularly those of the closely related animals he saw on the different islands of the Galapagos. And then, in his vast garden at Downe, Darwin had crossbred orchids, grown passionflowers and on one occasion played a bassoon to earthworms to test their response to vibrations. He collected masses of data about plant and animal breeding to support his arguments in “The Origin of Species.” Wallace could provide nothing like this.
This has not stopped accusations that Darwin and his supporters used some very dirty tricks indeed to scupper Wallace. According to these ideas, Darwin received Wallace’s paper from Ternate several weeks earlier than he later claimed, filched its contents and then used them as his own in “The Origin of Species.” This argument is outlined in two American books — by Arnold Brackman and by John Langdon Brooks — that were published 20 years ago and depict Darwin as an unscrupulous opportunist and intellectual thief. Neither book provides anything like a convincing case, however, and the vast majority of academics have since concluded their claims are neither fair nor credible.
As Wallace’s own biographer Peter Raby concludes: “Never has an intriguing theory been built on slenderer evidence. As for the human factor, there is nothing in Darwin’s life to suggest that he was capable of such massive intellectual dishonesty, even if he was not especially generous in acknowledging his sources and debts.”
Indeed, historians argue that had it not been for Darwin, the idea of natural selection would have suffered grievously. If he had not been the first to develop natural selection, and Wallace had been the one to get the kudos and attention, the theory would have made a very different impact. “In the end, Wallace came to believe evolution was sometimes guided by a higher power,” adds Endersby, who has edited the forthcoming Cambridge University Press edition of “The Origin of Species.”
“He thought natural selection could not account for the nature of the human mind and claimed humanity was affected by forces that took it outside the animal kingdom.” This is perilously close to the idea of Intelligent Design, the notion — put forward by modern creationists — that a deity had a hand in directing the course of evolution. By contrast, Darwin’s vision was austere and indicated humanity as a mere “twig on the enormously arborescent bush of life which, if replanted from seed, would almost surely not grow this twig again,” as Stephen Jay Gould describes it. According to Darwin, there are no get-out clauses for humans. We are as bound to the laws of natural selection as a bacterium or a tortoise.
The roots of this unforgiving doctrine have a very human face, however. Darwin meshed his life and career tightly together. He was a family man to his core and while he was grief-stricken by the death of baby Charles in 1858, he had been left utterly shattered by the death from tuberculosis of his 10-year-old daughter, Annie, in 1851, as his great-great grandson, Randal Keynes points out in his book “Annie’s Box: Charles Darwin, his daughter and human evolution.”
Mustard poultices, brandy, chloride of lime and ammonia were all that medicine could then offer Annie when she started to sicken. None had any effect on her worsening bouts of vomiting and delirium until Annie “expired without a sigh” on April 23, 1851, Darwin recalled. “We have lost the joy of the household and the solace of our old age.” Keynes argues persuasively that Annie’s death had a considerable impact on Darwin’s thinking. “In her last days, he had watched as her face was changed beyond recognition by the emaciation of her fatal illness. You could only understand the true conditions of life if you held on to a sense of the true ruthlessness of natural forces.”
Thus Darwin’s eyes had been opened to the unforgiving processes that drive evolution. “We behold the face of nature bright with gladness,” he wrote years later. “We do not see, or we forget, that the birds which are idly singing around us mostly live on insects or seeds, and are thus constantly destroying life, or we forget how largely these songsters, or their eggs, or their nestlings, are destroyed by birds or beasts of prey.”
Or as he wrote elsewhere: “All Nature is war.” This pitiless vision — which stressed blind chance as the main determiner in the struggle for survival and the course of evolution — was upsetting for Victorians who put such faith in self-help and hard work. Nevertheless, this is the version of natural selection that has since been supported by a century and a half of observation and which is now accepted by virtually every scientist on Earth.
It has not been a happy process, of course. Even today, natural selection holds a special status among scientific theories as the one that is still routinely rejected and attacked by a significant — albeit small — segment of society, mainly fundamentalist Christians and Muslims. Such individuals tend to have few views on relativity, the Big Bang, or quantum mechanics, but adamantly reject the idea that humanity is linked to the rest of the animal world and descended from ape-like ancestors.
“Twenty years ago, this was not a problem,” says Steve Jones, a professor of genetics at University College London. “Today, I get dozens of students who ask to be excused lectures on evolution because of their religious beliefs. They even accuse me of telling lies when I say natural selection is backed by the facts. So I ask if they believe in Mendel’s laws of genetics? They say yes, of course. And the existence of DNA? Again, yes. And genetic mutations? Yes. The spread of insecticide resistance? Yes. The divergence of isolated populations on islands? Yes. And do you accept that 98 percent of DNA is shared by humans and chimps? Again yes. So what is wrong with natural selection? It’s all lies, they say. It beats me, frankly.”
This dismay is shared by Dawkins. “These people claim the world is less than 10,000 years old, which is wrong by a great many orders of magnitude. Earth is several billion years old. These individuals are not just silly, they are colossally, staggeringly ignorant. I am sure sense will prevail, however.” And Jones agrees. “It’s a passing phase. In 20 years, this nonsense will have gone.” Natural selection is simply too important for society to live without it, he argues. It is the grammar of the living world and provides biologists with the means to make sense of our planet’s myriad plants and animals, a view shared by Attenborough whose entire Life on Earth programs rests on the bed-rock of Darwinian thinking.
“Opponents say natural selection is not a theory supported by observation or experiment; that it is not based on fact; and that it cannot be proved,” Attenborough says. “Well, no, you cannot prove the theory to people who won’t believe in it any more than you can prove that the Battle of Hastings took place in 1066. However, we know the battle happened then, just as we know the course of evolution on Earth unambiguously shows that Darwin was right.” | 3.050294 |
No Holiday As Joyous
Tu B’Av (The Fifteenth of Av) is no longer the well-known holiday on the Jewish calendar that it was in ancient times. In fact, in Talmudic times it was said: “There were no holidays so joyous for the Jewish People as the Fifteenth of Av…” (Ta’anit 26b).
On Tu B’Av, the unmarried maidens of Jerusalem would go out to the vineyards to dance together under the gaze of the unmarried men (sort of a Sadie Hawkins Day!). Each young lady would be dressed in white clothing borrowed from her neighbor so that those who came from wealthy families would not stand out and none would be embarrassed.
As they danced, the ladies would call out: “Young man, lift your eyes and choose wisely. Don’t look only at physical beauty–look rather at the family [values], ‘For charm is false, and beauty is deceitful. A God-fearing woman is the one to be praised…’” (Proverbs 31:30).
While in ancient times the same ceremony also took place on Yom Kippur, the day of Tu B’Av was specifically set aside for this celebration because it was the anniversary of the date on which inter-tribal marriages were permitted after the Israelites had entered the Land of Israel.
Today is Tu B’Av.
Copyright © 2011 National Jewish Outreach Program. All rights reserved.
The Jewish Month
By Chaim Issacson
The world has two time-measuring cycles, the sun and the moon. As all know, the sun takes some 365 and a quarter days to make a yearly cycle. The moon, on the other hand, completes a cycle every 29 and a half days. This makes it difficult to measure years using the lunar cycle alone.
The world is divided as to how to use these two cycles. Whereas the western world uses the solar year, the Muslim world uses the lunar year. The Muslims add twelve months together and that is a year. It therefore happens that a Muslim baby born in the winter will celebrate an adult birthday in the summer, since the lunar year is some 354 days and the solar year is 365. Each year the Muslim calendar "loses" 11 days; its year "slips" back by 11 days in relation to the solar calendar. Whereas the Christian holidays always fall on the same date according to the solar calendar and have no relation to the lunar cycle, a Muslim holiday such as Ramadan will move through the solar year while remaining fixed by the moon.
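The 11-day slip can be checked with simple arithmetic. This sketch uses approximate astronomical values (not figures from the article itself):

```python
# Approximate astronomical year lengths, in days.
SOLAR_YEAR = 365.2422            # mean tropical year
LUNAR_YEAR = 12 * 29.530589      # twelve synodic months, about 354.37 days

def drift_after(years):
    """Days a purely lunar calendar falls behind the solar one."""
    return years * (SOLAR_YEAR - LUNAR_YEAR)

print(round(drift_after(1), 1))                       # roughly 11 days per year
print(round(SOLAR_YEAR / (SOLAR_YEAR - LUNAR_YEAR)))  # years for a date to lap the seasons
```

At roughly 11 days per year, a lunar date such as Ramadan cycles all the way around the solar seasons in a little over three decades.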
We Jews, of course, have our own way of fixing the calendar. We use a combo of solar and lunar measurements.
One of the first commandments that was given to the Jews, even before they left Egypt, was to fix the calendar to observe the month of the Passover in the appointed time, the spring. Thus, the requirement of having a monthly calendar that is adjusted to meet the solar cycle is a Biblical injunction.
It should not be overlooked that two of the major Jewish festivals begin on the full moon, Passover and Succoth. Rosh Hashanah, the Jewish New Year is different, it is the first day of the seventh month.
The Jewish month is based on the sighting of the new moon. When two persons saw the new moon, they would come to the Jewish High Court and give testimony that they had seen it. This did not mean that the court did not know when the moon was scheduled to appear, but rather that the sanctification of the moon had to be made according to eyewitness reports. The court would question the witnesses to ascertain that they had indeed seen the moon. Afterwards, the court proclaimed that a new month had begun.
Even if the court knew by its own calculations that the new moon was scheduled to appear on a certain day, but due to weather conditions it was not seen, they did not declare the new moon until the next day when witnesses arrived. It was conceivable that the new month would be delayed a day. Also the courts were empowered to add an extra month to the yearly cycle to insure that the holiday of Passover would be in the spring. For this purpose they would add another month to the yearly cycle. The year would then have thirteen months (as in this year 2000, or as we say, 5760).
Since the spotting of the moon and the declaration of the new month had ramifications as to when the Jewish holidays would be, many tried to thwart the actions of the courts in order to prevent the Jews from observing their holidays. Aside from dishonest 'witnesses', the ancient Greeks, during the time of Chanukah, tried through various decrees to prevent the Jews from declaring new months, in hopes of keeping the Jews from observing their holidays.
In addition, during the time after the first exile, when many Jews still resided in Babylon, the courts set up a relay system of lighting fires on the tops of mountains. In this manner the declaration of the month was relayed quickly to the Jews in the Diaspora so that they could observe the festival of Passover and the holy fast of Yom Kippur in its proper time. The Samaritans, who inhabited the mountainous area, would light fires in order to confuse the Jews in the Diaspora. The courts were then obligated to send out messengers to bring the news to those Jews who lived so far away.
Although the new moon is not a 'religious' holiday, it does have religious significance. The day, during the time of the Holy Temple, was marked with an additional sacrifice. Today, a special prayer called "Hallel," or praise, is said and many have the custom to eat something special in honor of the new month.
In olden times, we had a court that would declare the new month according to the sightings. This court was made up of judges who were empowered through a direct chain from Moses. This was a requirement. Today, no one has such authority to declare a new month. Thanks to the farsightedness of Rabbi Hillel the Prince, who was the last of the princes from the house of David, we have a calendar that has all of the months and holidays figured until the Jewish year 6000. According to the Jewish tradition, the Messiah will come by the year 6000 and the continuation of the calendar will be addressed then.
Rabbi Hillel, who lived during the turbulent time of the destruction of the Temple, saw that the troubles of the Jewish people were increasing and that the ability of the courts was diminishing. Using the calculations that were known to the Jewish sages from the time of Moses, he publicized the calendar, and through his efforts it was accepted. This calendar is the one we use today. This calendar basically uses a 19-year cycle of twelve regular (12 month) years and seven intercalated (13 month) years.
The order of the intercalated years in each cycle is the 3rd, 6th, 8th, 11th, 14th, 17th, and 19th years. The month that is added is the month of Adar (the month that precedes the month that has the Passover holiday). The year then has two Adars, Adar 1 and Adar 2. The holiday of Purim is then celebrated in the second Adar.
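The rule above is mechanical enough to express as a small check. The sketch below is illustrative only: the function name and the convention of mapping a Hebrew year number onto its position in the 19-year cycle via the modulo operator are our own assumptions, but the set of leap positions comes directly from the list above.

```python
# Intercalated (13-month) years within each 19-year cycle, as listed above.
LEAP_POSITIONS = {3, 6, 8, 11, 14, 17, 19}

def is_leap_year(hebrew_year: int) -> bool:
    """Return True if the given Hebrew year gets a second Adar."""
    position = hebrew_year % 19
    if position == 0:
        position = 19  # remainder 0 corresponds to the 19th year of the cycle
    return position in LEAP_POSITIONS

# The year 5760, mentioned above as a 13-month year, falls on
# position 3 of its cycle and so comes out as a leap year.
print(is_leap_year(5760))  # True
```

This matches the article's example: 5760 mod 19 is 3, the first intercalated position in the cycle.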
The Jewish people are often compared in rabbinical literature to the moon. Just as the moon has periods of being full and being lacking, so do the Jewish people. Just as the moon receives its light from the sun, so the Jewish people receive their sustenance from G-d directly.
from the February 2000 Edition of the Jewish Magazine
When you are injected with the flu vaccine, your body reacts as if it has been infected with the actual living virus and makes antibodies that provide immunity against the real virus. These antibodies remain at high levels for only six to nine months. These waning antibody levels are one reason why you need to be revaccinated each year.
The main reason you should be revaccinated yearly is that the flu virus is constantly changing and evolving into new strains. Each year the CDC attempts to predict which flu strain will be predominant. The CDC works with vaccine manufacturers to produce the specific vaccine that will combat the predicted strain.
If you are concerned about the cost of a flu immunization, check with your local health department for locations in your area where free flu shots are given.
Treating yourself at home
When you are exposed to the flu, the virus incubates for three to five days before symptoms begin. You probably have the flu if you come down with a high fever, sore throat, muscle aches, a runny or stuffy nose, and a cough (usually dry). The symptoms in children may also include vomiting, diarrhea and ear infections. Flu is usually self-treatable but has to run its course. You can treat symptoms by getting bed rest, drinking plenty of fluids, taking acetaminophen for aches and pains, and using a humidifier to keep nasal passages moist.
Expect the flu to last about five days, which is the time it takes your body to produce the antibodies that finally beat the infection. You will be protected from that strain of influenza for the rest of the season. Some people continue to feel ill and cough for more than two weeks. In some cases, the flu can make health conditions such as asthma or diabetes worse or lead to complications such as bacterial pneumonia. Adults older than 65 and people with chronic health conditions have the greatest risk for complications from the flu, the CDC says.
Antiviral medications are also recommended to treat the flu--amantadine, rimantadine, zanamivir and oseltamivir--but must be taken within the first two days of illness to be effective, the CDC says. They can reduce the length of time flu symptoms are present. These medications usually are used in hospitals, nursing homes and other institutions where people are at high risk for complications of the flu. Some side effects may result from taking these medications, such as nervousness, lightheadedness, or nausea. Individuals with asthma or chronic obstructive pulmonary disease are cautioned about using zanamivir. Talk to your health care provider if you think you should take one of these medications. These medications are not meant as a substitute for vaccination.
Kidney Stones in Children
On this page:
- What is a kidney stone?
- What is the urinary tract?
- Are kidney stones common in children?
- What causes kidney stones in children?
- What are the signs and symptoms of kidney stones in children?
- What types of kidney stones occur in children?
- How are kidney stones in children diagnosed?
- How are kidney stones in children treated?
- How are kidney stones in children prevented?
- Eating, Diet, and Nutrition
- Points to Remember
- Hope through Research
- For More Information
What is a kidney stone?
A kidney stone is a solid piece of material that forms in a kidney when substances that are normally found in the urine become highly concentrated. A stone may stay in the kidney or travel down the urinary tract. Kidney stones vary in size. A small stone may pass out of the body causing little or no pain. A larger stone may get stuck along the urinary tract and can block the flow of urine, causing severe pain or blood that can be seen in the urine.
What is the urinary tract?
The urinary tract is the body’s drainage system for removing wastes and extra water. The urinary tract includes two kidneys, two ureters, a bladder, and a urethra. The kidneys are a pair of bean-shaped organs, each about the size of a fist and located below the ribs, one on each side of the spine, toward the middle of the back. Every minute, a person’s kidneys filter about 3 ounces of blood, removing wastes and extra water. The wastes and extra water make up the 1 to 2 quarts of urine an adult produces each day. Children produce less urine each day; the amount produced depends on their age. The urine travels from the kidneys down two narrow tubes called the ureters. The urine is then stored in a balloonlike organ called the bladder. When the bladder empties, urine flows out of the body through a tube called the urethra at the bottom of the bladder.
Are kidney stones common in children?
No exact information about the incidence of kidney stones in children is available, but many kidney specialists report seeing more children with this condition in recent years. While kidney stones are more common in adults, they do occur in infants, children, and teenagers from all races and ethnicities.
What causes kidney stones in children?
Kidney stones can form when substances in the urine—such as calcium, magnesium, oxalate, and phosphorus—become highly concentrated due to one or more causes:
- Defects in the urinary tract may block the flow of urine and create pools of urine. In stagnant urine, stone-forming substances tend to settle together into stones. Up to one-third of children who have stones have an anatomic abnormality in their urinary tract.
- Kidney stones may have a genetic cause. In other words, the tendency to form stones can run in families due to inherited factors.
- An unhealthy lifestyle may make children more likely to have kidney stones. For example, drinking too little water or drinking the wrong types of fluids, such as soft drinks or drinks with caffeine, may cause substances in the urine to become too concentrated. Similarly, too much sodium, or salt, in the diet may contribute to more chemicals in the urine, causing an increase in stone formation. Some doctors believe increases in obesity rates, less active lifestyles, and diets higher in salt may be causing more children to have kidney stones.
- Sometimes, a urinary tract infection can cause kidney stones to form. Some types of bacteria in the urinary tract break down urea—a waste product removed from the blood by the kidneys—into substances that form stones.
- Some children have metabolic disorders that lead to kidney stones. Metabolism is the way the body uses digested food for energy, including the process of breaking down food, using food’s nutrients in the body, and removing the wastes that remain. The most common metabolic disorder that causes kidney stones in children is hypercalciuria, which causes extra calcium to collect in the urine. Other more rare metabolic conditions involve problems breaking down oxalate, a substance made in the body and found in some foods. These conditions include hyperoxaluria, too much oxalate in the urine, and oxalosis, characterized by deposits of oxalate and calcium in the body’s tissues. Another rare metabolic condition called cystinuria can cause kidney stones. Cystinuria is an excess of the amino acid cystine in the urine. Amino acids are the building blocks of proteins.
What are the signs and symptoms of kidney stones in children?
Children with kidney stones may have pain while urinating, see blood in the urine, or feel a sharp pain in the back or lower abdomen. The pain may last for a short or long time. Children may experience nausea and vomiting with the pain. However, children who have small stones that pass easily through the urinary tract may not have symptoms at all.
What types of kidney stones occur in children?
Four major types of kidney stones occur in children:
- Calcium stones are the most common type of kidney stone and occur in two major forms: calcium oxalate and calcium phosphate. Calcium oxalate stones are more common. Calcium oxalate stone formation has various causes, which may include high calcium excretion, high oxalate excretion, or acidic urine. Calcium phosphate stones are caused by alkaline urine.
- Uric acid stones form when the urine is persistently acidic. A diet rich in purines—substances found in animal proteins such as meats, fish, and shellfish—may cause uric acid. If uric acid becomes concentrated in the urine, it can settle and form a stone by itself or along with calcium.
- Struvite stones result from kidney infections. Eliminating infected stones from the urinary tract and staying infection-free can prevent more struvite stones.
- Cystine stones result from a genetic disorder that causes cystine to leak through the kidneys and into the urine in high concentration, forming crystals that tend to accumulate into stones.
How are kidney stones in children diagnosed?
The process of diagnosing any illness begins with consideration of the symptoms. Pain or bloody urine may be the first symptom. Urine, blood, and imaging tests will help determine whether symptoms are caused by a stone. Urine tests can be used to check for infection and for substances that form stones. Blood tests can be used to check for biochemical problems that can lead to kidney stones. Various imaging techniques can be used to locate the stone:
- Ultrasound uses a device, called a transducer, that bounces safe, painless sound waves off organs to create an image of their structure. An abdominal ultrasound can create images of the entire urinary tract. The procedure is performed in a health care provider’s office, outpatient center, or hospital by a specially trained technician, and the images are interpreted by a radiologist—a doctor who specializes in medical imaging; anesthesia is not needed. The images can show the location of any stones. This test does not expose children to radiation, unlike some other imaging tests. Although other tests are more useful in detecting very small stones or stones in the lower portion of the ureter, ultrasound is considered by many health care providers to be the best screening test to look for stones.
- Computerized tomography (CT) scans use a combination of x rays and computer technology to create three-dimensional (3-D) images. A CT scan may include the injection of a special dye, called contrast medium. CT scans require the child to lie on a table that slides into a tunnel-shaped device where the x rays are taken. The procedure is performed in an outpatient center or hospital by an x-ray technician, and the images are interpreted by a radiologist; anesthesia is not needed. CT scans may be required to get an accurate stone count when children are being considered for urologic surgery. Because CT scans expose children to a moderate amount of radiation, health care providers try to reduce radiation exposure in children by avoiding repeated CT scans, restricting the area scanned as much as possible, and using the lowest radiation dose that will provide the needed diagnostic information.
- X-ray machines use radiation to create images of the child’s urinary tract. The images can be taken at an outpatient center or hospital by an x-ray technician, and the images are interpreted by a radiologist; anesthesia is not needed. The x rays are used to locate many kinds of stones. A conventional x ray is generally less informative than an ultrasound or CT scan, but it is less expensive and can be done more quickly than other imaging procedures.
How are kidney stones in children treated?
The treatment for a kidney stone usually depends on its size and what it is made of, as well as whether it is causing symptoms of pain or obstructing the urinary tract. Small stones usually pass through the urinary tract without treatment. Still, children will often require pain control and encouragement to drink lots of fluids to help move the stone along. Pain control may consist of oral or intravenous (IV) medication, depending on the duration and severity of the pain. IV fluids may be needed if the child becomes dehydrated from vomiting or an inability to drink. A child with a larger stone, or one that blocks urine flow and causes great pain, may need to be hospitalized for more urgent treatment. Hospital treatments may include:
- Shock wave lithotripsy (SWL). A machine called a lithotripter is used by the doctor to crush the kidney stone. In SWL, the child lies on a table or, less commonly, in a tub of water above the lithotripter. The lithotripter generates shock waves that pass through the child’s body to break the kidney stone into smaller particles to pass more readily through the urinary tract. Children younger than age 12 may receive general anesthesia during the procedure. Older children usually receive an IV sedative and pain medication.
- Removal of the stone with a ureteroscope. A ureteroscope is a long, tubelike instrument used to visualize the urinary tract. After the child receives a sedative, the doctor inserts the ureteroscope into the child's urethra and slides the scope through the bladder and into the ureter. Through the ureteroscope, which has a small basket attached to the end, the doctor may be able to see and remove the stone in the ureter.
- Lithotripsy with a ureteroscope. Another way to treat a kidney stone through a ureteroscope is to extend a flexible fiber through the scope up to the stone. The fiber is attached to a laser generator. Instead of shock waves, the fiber delivers a laser beam to break the stone into smaller pieces that can pass out of the body in the urine. The child may receive general anesthesia or a sedative.
- Percutaneous nephrolithotomy. In this procedure, a tube is inserted directly into the kidney through an incision in the child's back. Using a wire-thin viewing instrument called a nephroscope, the doctor locates and removes the stone. For large stones, an ultrasonic probe that acts as a lithotripter may be needed to deliver shock waves that break the stone into small pieces that can be removed more easily. Children receive general anesthesia for percutaneous nephrolithotomy. Often, children stay in the hospital for several days after the procedure and may have a small tube called a nephrostomy tube inserted through the skin into the kidney. The nephrostomy tube drains urine and any residual stone fragments from the kidney into a urine collection bag. The tube usually is left in the kidney for 2 or 3 days while the child remains in the hospital.

How are kidney stones in children prevented?
To prevent kidney stones, health care providers and their patients must understand what is causing the stones to form. Especially in children with suspected metabolic abnormalities or with recurrent stones, a 24-hour urine collection is obtained to measure daily urine volume and to determine if any underlying mineral abnormality is making a child more likely to form stones. Based on the analysis of the collected urine, the treatment can be individualized to address a metabolic problem.
In all circumstances, children should drink plenty of fluids to keep the urine diluted and flush away substances that could form kidney stones. Urine should be almost clear.
Eating, Diet, and Nutrition
Families may benefit from meeting with a dietitian to learn how dietary management can help in preventing stones. Depending on the underlying cause of the stone formation, medications may be necessary to prevent recurrent stones. Dietary changes and medications may be required for a long term or, quite often, for life. Some common changes include the following:
- Children who tend to make calcium oxalate stones or have hypercalciuria should eat a regular amount of dietary calcium and limit salt intake. A thiazide diuretic medication may be given to some children to reduce the amount of calcium leaking into the urine.
- Children who have large amounts of oxalate in the urine may need to limit foods high in oxalate, such as chocolate, peanut butter, and dark-colored soft drinks.
- Children who form uric acid or cystine stones may need extra potassium citrate or potassium carbonate in the form of a pill or liquid medication. Avoiding foods high in purines—such as meat, fish, and shellfish—may also help prevent uric acid stones.
Points to Remember
- A kidney stone is a solid piece of material that forms in a kidney when some substances that are normally found in the urine become highly concentrated.
- Kidney stones occur in infants, children, and teenagers from all races and ethnicities.
- Kidney stones in children are diagnosed using a combination of urine, blood, and imaging tests.
- The treatment for a kidney stone usually depends on its size and composition as well as whether it is causing symptoms of pain or obstructing the urinary tract.
- Small stones usually pass through the urinary tract without treatment. Still, children will often require pain control and encouragement to drink lots of fluids to help move the stone along.
- Children with larger stones, or stones that block urine flow and cause great pain, may need to be hospitalized for more urgent treatment.
- Hospital treatments may include shock wave lithotripsy (SWL), removal of the stone with a ureteroscope, lithotripsy with a ureteroscope, or percutaneous nephrolithotomy.
- To prevent recurrent kidney stones, health care providers and their patients must understand what is causing the stones to form.
- In all circumstances, children should drink plenty of fluids to keep the urine diluted and flush away substances that could form kidney stones. Urine should be almost clear.
Hope through Research
The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), one of the National Institutes of Health, supports research aimed at better understanding and preventing kidney stones in children. Researchers supported by the NIDDK have identified three proteins that inhibit the formation of calcium oxalate stones. Conventional urine tests do not provide information about the presence or absence of these proteins. Developing a test for these proteins that can be used in the clinical setting will help health care providers identify children at risk for stone formation so they can manage that risk.
Participants in clinical trials can play a more active role in their own health care, gain access to new research treatments before they are widely available, and help others by contributing to medical research. For information about current studies, visit www.ClinicalTrials.gov.
For More Information
National Kidney Foundation
30 East 33rd Street
New York, NY 10016
Phone: 1–800–622–9010 or 212–889–2210
Publications produced by the Clearinghouse are carefully reviewed by both NIDDK scientists and outside experts. This publication was reviewed by the following members of the American Society of Pediatric Nephrology Clinical Affairs Committee: Michael Somers, M.D., Children’s Hospital Boston; Deepa Chand, M.D., M.H.S.A., Akron Children’s Hospital; John Foreman, M.D., Duke University; Jeffrey Fadrowski, M.D., M.H.S., The Johns Hopkins University; Kevin Meyers, M.D., Children’s Hospital of Philadelphia; Greg Nelsen, M.S.S.W., University of Virginia Health System; Michelle Baum, M.D., Children’s Hospital Boston; and Ann Guillot, M.D., University of Vermont.
National Kidney and Urologic Diseases Information Clearinghouse
The National Kidney and Urologic Diseases Information Clearinghouse (NKUDIC) is a service of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The NIDDK is part of the National Institutes of Health of the U.S. Department of Health and Human Services. Established in 1987, the Clearinghouse provides information about diseases of the kidneys and urologic system to people with kidney and urologic disorders and to their families, health care professionals, and the public. The NKUDIC answers inquiries, develops and distributes publications, and works closely with professional and patient organizations and Government agencies to coordinate resources about kidney and urologic diseases.
This publication is not copyrighted. The Clearinghouse encourages users of this publication to duplicate and distribute as many copies as desired.
NIH Publication No. 11–7383
Page last updated March 28, 2012
Our last blog post, in which we discussed our favourite animated films, prompted us to think back to animations we have enjoyed from childhood. Some of the best loved animations of all time date back to the 1920s, when Walt Disney and Warner Brothers revolutionised the industry. The earliest version of Mickey Mouse was created in 1928, followed closely by the Warner Brothers cartoons in 1930. But what about even earlier than that? Where did animation begin, when and how?
So, we asked Kurobot to share his knowledge and teach us all about the origin of animation.
Motion in art can date back as far as Paleolithic cave paintings, a very very long time ago! Paintings of animals with many legs in various positions are noted as the first attempts at conveying motion. Other pieces include sequences and phases of movement in both animals and people painted onto bowls, plates and murals.
Animation before film dates back to the 1600s, when numerous devices were used to display animated images. The magic lantern, invented in 1650, used a translucent oil painting and a lamp to project images onto adjacent flat surfaces. It was often used to display monsters and demons to convince people they were witnessing supernatural events. (We have ours ready for Halloween)
The Thaumatrope, invented in 1824, was one of the first devices to demonstrate the Phi phenomenon, the human (and Kurobot) brain's ability to persistently perceive an image. The device used a small circular piece of card with different images on each side, strung onto a central cord. When the cord is spun between the fingers, the images appear to merge into one, creating a moving image.
Possibly the most well known origin of motion pictures is the humble flip book. Invented in 1868, a flip book features a sequence of animated images on the unbound edge of each page of the book. When bending the pages back and flicking through the book, the images merge due to the rapid replacement of each image with the next, forming a short animation.
The silent era of animation began in the late 1800s with the production of short stop motion animations, the most famous of which, produced in 1920, was Felix the Cat. Felix was the first merchandised cartoon character and became a household name.
1923 marked the beginning of the golden age of animation, when a small studio, "Laugh-o-grams", went bankrupt and its owner, Walt Disney, opened a new studio in Los Angeles. This was possibly one of the most significant events in the history of animation.
The first Disney productions include "The Alice Comedies" series, "Song Car Tunes" and "Dinner Time". However, the most notable breakthrough, entitled "Steamboat Willie", featured an anthropomorphic mouse named "Mickey" neglecting his work on a steamboat to instead make music with the animals aboard the boat. This would mark the development of animation in our generation, including TV and CGI animation.
Image sources: Silent London, Education Eastman House, The Local, Good Comics, Animation Connection
The Sonata in B-flat minor, Op. 35, was written in 1839 and published the following year. Unusually, Chopin initially approved the Sonate funèbre title, but later took out the adjective in the 3rd French edition. He described the work in an August 1839 letter to Julian Fontana thus: “Here I am writing a Sonata in B-flat minor, containing the march that you know. There is an allegro, then a Scherzo in E-flat minor, the march and short finale, perhaps 3 of my pages; the left hand in unison with the right, gossiping after the march.” As is apparent from this remark, the Funeral March was composed earlier, probably in 1837, as witnessed by an album leaf containing the first eight bars of the Trio and dated “Paris, 28. September 1837”. This movement was orchestrated by Henri Reber to be played in the Church of the Madeleine in Paris at Chopin’s own funeral in October 1849. The other three movements were concluded in the summer of 1839, in George Sand’s manor house at Nohant, right after their return from Majorca. While quickly gaining popularity, the work was misunderstood by critics from the very beginning. Thus, while Anton Rubinstein called the piece “Death poem”, Robert Schumann was baffled by it, admitting it possessed beauty, but apparently misunderstanding its musical ideas and structure, since he referred to it as “four of Chopin’s maddest children under the same roof” and to the last movement, devoid of melody and clear key, as “a jeer, but not music”. It has been suggested that this sonata was modelled on Beethoven’s Sonata Op. 26 in A-flat major, also known as the “Funeral march”, which Chopin often played and taught.
Written five years after the Second Sonata and published in 1845, the Sonata in B minor, Op. 58, lies on the other side of the transition period that many see as pivotal in Chopin’s life. This work was completed a few months after the Berceuse, and was written in times of tranquillity and relatively good health. The largest of all of Chopin’s works for piano solo, it represents – together with the Fantasie and the 4th Ballade – the apotheosis of his creativity.
Called “the most beautiful nocturne of all” by A. Hedley, “ravishing” by J. Rink, “messianic” by K. Stromenger and “stunning” by H. Leichentritt, Chopin’s Barcarolle was also greatly admired by artists such as von Bülow and was found by M. Ravel to be “the synthesis of the expressive and sumptuous art of this great Slav”, and to express “languor in excessive joy” by A. Gide. The Barcarolle represents a case in point of Chopin’s ornamental genius. Ravel wrote: “Chopin was not content merely to revolutionize piano technique. His figurations are inspired. Through his brilliant passages one perceives profound, enchanting harmonies. Always there is the hidden meaning which is translated into poetry of intense despair.”
Chopin may have begun his work on the Barcarolle because he suddenly found himself with time on his hands, an idea of a trip to Italy in the autumn of 1845 having been cancelled due to the opposition of George Sand’s son, Maurice. The work carried over into the next year, which is when the piece was finalized and published. Originally the typical song of Venetian gondoliers, the barcarolle was often used in the Romantic period due to its exotic ambience and the 6/8 or 12/8 lilting rhythm. J. Chantavoine suggested that Chopin’s Barcarolle may have been a result of George Sand’s stories about Venice. Chopin constructed it formally as one of his nocturnes, in three sections, where the middle one draws particularly on the boat-song 12/8 rhythm and imagery. Harmonically, it is one of his most advanced works and it also explores trills in a way that Beethoven has done in his late sonatas.
© 2005 Robert Andres
Recorded at Potton Hall, UK, 17 - 24th June 2004
Produced by Philip Hobbs
Engineered by Julia Thomas
Post Production at Finesplice, UK
Photographs of Artur Pizarro by Sven Arnstein
Education That is Multicultural and Achievement (ETMA)
The Maryland State Department of Education implements a State Regulation (COMAR 13A.04.05), expanded in 1995 and revised in 2005, that requires all local school systems to infuse Education That Is Multicultural into instruction, curriculum, staff development, instructional resources, and school climate. It also requires the Maryland State Department of Education to incorporate multicultural education into its programs, publications, and assessments.
Education That Is Multicultural is defined as "a continuous, integrated, multidisciplinary process for educating all students about diversity and commonality. Diversity factors include, but are not limited to, race, ethnicity, region, religion, gender, language, socioeconomic status, age, and individuals with disabilities. Education That Is Multicultural prepares students to live, interact, and work creatively in an interdependent global society by focusing on mutual appreciation and respect. It is a process which is complemented by community and parent involvement in support of multicultural initiatives."
Definition
By Mayo Clinic staff
A broken rib, or fractured rib, is a common injury that occurs when one of the bones in your rib cage breaks or cracks. The most common cause of broken ribs is trauma to the chest, such as from a fall, motor vehicle accident or impact during contact sports.
Many broken ribs are merely cracked. While still painful, cracked ribs aren't as potentially dangerous as ribs that have been broken. In these situations, a jagged piece of bone could damage major blood vessels or internal organs, such as the lungs.
In most cases, broken ribs heal on their own in one or two months. Adequate pain control is important, so you can continue to breathe deeply and avoid lung complications, such as pneumonia.
Addison’s disease, also known as Adrenal Insufficiency or Hypocortisolism, is an endocrine disease that occurs when the adrenal glands do not produce enough of the hormone cortisol and sometimes, aldosterone. Discuss topics including symptoms and treatments for Addison’s disease.
This site is huge. Chapter 7 has more info on antibodies than I've ever read. Very technical, medical terminology.
Fragile X Syndrome (cont.)
A variety of professionals can help individuals with Fragile X and their families deal with symptoms of the disorder. Such assistance is usually most effective when provided by health care professionals experienced with Fragile X.
- Speech-language therapists can help people with Fragile X to improve their pronunciation of words and sentences, slow down speech, and use language more effectively. They may set up social or problem-solving situations to help a child practice using language in meaningful ways. For the minority of children who fail to develop functional speech, this type of specialist may work with other specialists to design and teach nonverbal ways of communication. For example, some children may prefer to use small picture cards arranged on a key ring to express themselves; or they may learn to use a hand-held computer that is programmed to "say" words and phrases when a single key is pressed.
- Occupational therapists help find ways to adjust tasks and conditions to match a person's needs and abilities. For example, this type of therapist might teach parents to swaddle or massage their baby who has Fragile X to calm him or her. Or the therapist might find a specially designed computer mouse and keyboard or a pencil that is easier for a child with poor motor control to grip. At the high school level, an occupational therapist can help a teenager with Fragile X identify a job, career, or skill that matches his or her interests and individual capabilities.17
- Physical therapists design activities and exercises to build motor control and to improve posture and balance. They can teach parents ways to exercise their baby's muscles. At school, a physical therapist may help a child who is easily over-stimulated or who avoids body contact to participate in sports and games with other children.
- Behavioral therapists try to identify why a child acts in negative ways and then seek ways to prevent these distressing situations, and to teach the child to cope with the distress. This type of specialist also works with parents and teachers to find useful responses to desirable and undesirable behavior. During puberty, rising and changing hormone levels can cause adolescents to become more aggressive. A behavioral therapist can help a teenager recognize his or her intense emotions and teach healthy ways to calm down.

The services of these specialists may be available to pre-school and school-aged children, as well as to teens, through the local public school system. In a school setting, several specialists often work together to assess each child's particular strengths and weaknesses, and to plan a program that is specially tailored to meet the child's needs. These services are often free. More intense and individualized help is available through private clinics, but the family usually has to pay for private services, although some health insurance plans may help cover the cost.
Weight Gain & Cancer Risk
Medical Author: Melissa Conrad Stöppler, MD
Medical Editor: William C. Shiel Jr., MD, FACP, FACR
Excess weight is a known risk factor for many chronic diseases, such as diabetes and heart disease. Obesity has also been linked to an increased risk for developing some cancers. To clarify the effects of weight gain on cancer risk, researchers in 2007 conducted an analysis of many studies reported in medical journals, together describing 282,137 cases of cancer. The researchers wanted to see if weight gain had an effect on the risk for certain cancer types.
In particular, the researchers looked at the risk of cancer associated with a weight gain corresponding to an increase of 5 kg/m2 in body mass index (BMI). In terms of actual weight gained, a man with a normal-range BMI of 23 would need to gain 15 kg (33 lbs.), while a woman with a BMI of 23 would need to gain 13 kg (28.6 lbs.), to correspond to an increase of 5 in the BMI.
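Because BMI is weight in kilograms divided by height in meters squared, the weight gain corresponding to a given BMI increase depends only on height. A minimal sketch of the arithmetic (the heights of roughly 1.73 m and 1.61 m are assumptions chosen to reproduce the article's 15 kg and 13 kg figures, not values stated in the study):

```python
def weight_gain_for_bmi_increase(height_m, delta_bmi=5.0):
    """Weight change (kg) that shifts BMI by delta_bmi at a given height.

    BMI = weight / height**2, so a BMI change of delta_bmi corresponds
    to a weight change of delta_bmi * height**2.
    """
    return delta_bmi * height_m ** 2

# Assumed average heights: ~1.73 m (man), ~1.61 m (woman).
print(round(weight_gain_for_bmi_increase(1.73), 1))  # ~15.0 kg
print(round(weight_gain_for_bmi_increase(1.61), 1))  # ~13.0 kg
```

The same function also shows why a fixed BMI increase means a larger absolute weight gain for taller people.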
The results, published in the Lancet in February 2008, revealed that weight gain is positively associated with the risk of developing a variety of types of cancer as described below.
For women, a weight gain corresponding to an increase of 5 in the BMI resulted in a significant increase in risk for developing four cancer types:
- esophageal adenocarcinoma (double the risk),
- endometrial cancer (slightly more than double the risk),
- gallbladder cancer (slightly more than double the risk), and
- kidney (renal) cancer. | 3.099166 |
Definition of Colchicine
Colchicine: A substance found in a plant that is used in clinical medicine for the treatment of gouty arthritis and in the laboratory to arrest cells during cell division (by disrupting the spindle) so their chromosomes can be visualized. The name colchicine is from the Greek kolchikon meaning autumn crocus or meadow saffron, the plant from which colchicine was originally isolated.
Last Editorial Review: 4/27/2011 5:27:15 PM
Phrenology: the study of the conformation of the skull based on the belief that it is indicative of mental faculties and character
Study of the shape of the skull as an indication of mental abilities and character traits. Franz Joseph Gall stated the principle that each of the innate mental faculties is based in a specific brain region (organ), whose size reflects the faculty's prominence in a person and is reflected by the skull's surface. He examined the skulls of persons with particular traits (including criminal traits) for a feature he could identify with it. His followers Johann Kaspar Spurzheim (1776–1832) and George Combe (1788–1858) divided the scalp into areas they labeled with traits such as combativeness, cautiousness, and form perception. Though popular well into the 20th century, phrenology has been wholly discredited. | 3.569694 |